14 results for Data-representation

at Indian Institute of Science - Bangalore - India


Relevance:

60.00%

Publisher:

Abstract:

Biological motion has successfully been used for analysis of a person's mood and other psychological traits, and efforts have been made to use human gait as a non-invasive biometric. In this work, we study the effectiveness of biological gait motion as a cue for biometric person recognition. The data is 3D in nature and hence carries more information than cues obtained from video-based gait patterns. The high accuracies of person recognition obtained with a simple linear model of data representation and simple neighborhood-based classifiers suggest that the nature of the data is more important than the recognition scheme employed.
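
A minimal sketch of the kind of neighborhood-based recognition the abstract refers to, assuming each gait sample is flattened into a fixed-length feature vector of 3D joint trajectories (the feature layout and the data below are illustrative, not from the paper):

    import numpy as np

    def nn_classify(train_X, train_y, test_X):
        """Assign each test gait sample the identity of its nearest training sample."""
        preds = []
        for x in test_X:
            dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to every training sample
            preds.append(train_y[np.argmin(dists)])
        return np.array(preds)

    # Toy usage with stand-in data: 3 persons, 10-dimensional gait feature vectors.
    rng = np.random.default_rng(0)
    train_X = rng.normal(size=(30, 10))
    train_y = np.repeat([0, 1, 2], 10)
    test_X = train_X[:5] + 0.01 * rng.normal(size=(5, 10))
    print(nn_classify(train_X, train_y, test_X))          # -> [0 0 0 0 0]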

Relevance:

40.00%

Publisher:

Abstract:

The applicability of a formalism involving an exponential function of composition x1 in interpreting the thermodynamic properties of alloys has been studied. The excess integral and partial molar free energies of mixing are expressed as:

$$\Delta F^{xs} = a_o\, x_1 (1 - x_1)\, e^{b x_1}$$
$$RT \ln \gamma_1 = a_o\, (1 - x_1)^2 (1 + b x_1)\, e^{b x_1}$$
$$RT \ln \gamma_2 = a_o\, x_1^2 (1 - b + b x_1)\, e^{b x_1}$$

The equations are used in interpreting experimental data for several relatively weakly interacting binary systems. For comparison, activity coefficients obtained by the subregular model and Krupkowski's formalism have also been computed. The present equations are convenient for describing the thermodynamic behavior of metallic solutions.
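
As a quick numerical illustration of the formalism (with arbitrary values of a_o and b, not fitted to any system), the partial quantities recombine to the integral excess free energy, Delta F^xs = x1·RT ln γ1 + (1 − x1)·RT ln γ2:

    import numpy as np

    a_o, b = -5000.0, 0.8                  # arbitrary interaction parameters (J/mol), for illustration only
    x1 = np.linspace(0.01, 0.99, 99)
    x2 = 1.0 - x1

    dF_xs   = a_o * x1 * x2 * np.exp(b * x1)
    RTln_g1 = a_o * x2**2 * (1 + b * x1) * np.exp(b * x1)
    RTln_g2 = a_o * x1**2 * (1 - b + b * x1) * np.exp(b * x1)

    # The partial excess free energies sum back to the integral quantity.
    assert np.allclose(dF_xs, x1 * RTln_g1 + x2 * RTln_g2)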

Relevance:

30.00%

Publisher:

Abstract:

A forest of quadtrees is a refinement of a quadtree data structure that is used to represent planar regions. A forest of quadtrees provides space savings over regular quadtrees by concentrating vital information. The paper presents some of the properties of a forest of quadtrees and studies the storage requirements for the case in which a single 2m × 2m region is equally likely to occur in any position within a 2n × 2n image. Space and time efficiency are investigated for the forest-of-quadtrees representation as compared with the quadtree representation for various cases.
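
A minimal region-quadtree sketch (the plain quadtree, not the forest refinement itself): a 2^n x 2^n binary image is split recursively until each block is uniform; the forest-of-quadtrees representation would then keep only selected subtrees to concentrate the vital information.

    import numpy as np

    def build_quadtree(img):
        """Return 0/1 for a uniform block, else a 4-tuple of child quadrants (NW, NE, SW, SE)."""
        if img.min() == img.max():
            return int(img[0, 0])                     # uniform leaf
        h = img.shape[0] // 2
        return (build_quadtree(img[:h, :h]),          # NW
                build_quadtree(img[:h, h:]),          # NE
                build_quadtree(img[h:, :h]),          # SW
                build_quadtree(img[h:, h:]))          # SE

    img = np.zeros((4, 4), dtype=int)
    img[:2, 2:] = 1                                   # one filled quadrant
    print(build_quadtree(img))                        # (0, 1, 0, 0)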

Relevance:

30.00%

Publisher:

Abstract:

Adsorption measurements of nitrogen on two specimens of activated carbon (Fluka and Sarabhai), made by the volumetric method and reported by us earlier, are refitted to two popular isotherms, namely the Dubinin–Astakhov (D–A) and Toth equations, in light of recently derived improved fitting methods. These isotherms are used to derive other data relevant to the design of engineering equipment, such as the concentration dependence of the heat of adsorption and Henry's law coefficients. The present fits represent the experimental measurements better than before because the temperature dependence of the adsorbed phase volume and the structural heterogeneity of the micropore distribution have been accounted for in the D–A equation. A new correlation for the Toth equation is a further contribution. The heat of adsorption at the limiting uptake condition is correlated with the Henry's law coefficients at the near-zero uptake condition.
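
An illustrative evaluation of the D–A isotherm named above, W = W0 exp[-(A/E)^n] with adsorption potential A = RT ln(Ps/P); the parameter values below are placeholders, not the fitted values reported in the paper:

    import numpy as np

    R = 8.314                       # J/(mol K)
    W0, E, n = 0.4, 6000.0, 1.5     # limiting uptake, characteristic energy, heterogeneity exponent (placeholders)

    def da_uptake(P, Ps, T):
        A = R * T * np.log(Ps / P)  # adsorption potential
        return W0 * np.exp(-(A / E)**n)

    print(da_uptake(P=20e3, Ps=100e3, T=77.0))   # uptake at 20 kPa, 77 K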

Relevance:

30.00%

Publisher:

Abstract:

Automatic identification of software faults has enormous practical significance. It requires characterizing program execution behavior and applying appropriate data mining techniques to the chosen representation. In this paper, we use the sequence of system calls to characterize program execution. The data mining tasks addressed are learning to map system call streams to fault labels and automatic identification of fault causes. Spectrum kernels and SVMs are used for the former, while latent semantic analysis is used for the latter. The techniques are demonstrated on an intrusion dataset containing system call traces. The results show that the kernel techniques are as accurate as the best available results but are faster by orders of magnitude. We also show that latent semantic indexing is capable of revealing fault-specific features.
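
A minimal sketch of how a k-spectrum kernel over system-call traces can be computed (an assumed construction for illustration, not the paper's code): each trace is mapped to counts of its length-k call subsequences, and the kernel value is the dot product of those count vectors.

    from collections import Counter

    def spectrum_features(trace, k=3):
        """Count all contiguous length-k windows of system calls in a trace."""
        return Counter(tuple(trace[i:i + k]) for i in range(len(trace) - k + 1))

    def spectrum_kernel(trace_a, trace_b, k=3):
        fa, fb = spectrum_features(trace_a, k), spectrum_features(trace_b, k)
        return sum(fa[w] * fb[w] for w in fa if w in fb)

    t1 = ["open", "read", "write", "read", "write", "close"]
    t2 = ["open", "read", "write", "close"]
    print(spectrum_kernel(t1, t2, k=2))   # similarity from shared call bigrams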

Relevance:

30.00%

Publisher:

Abstract:

The different formalisms for the representation of thermodynamic data on dilute multicomponent solutions are critically reviewed. The thermodynamic consistency of the formalisms is examined and the interrelations between them are highlighted. The options and constraints in the use of the interaction parameter and Darken's quadratic formalisms for multicomponent solutions are discussed in the light of the available experimental data. A truncated Maclaurin series expansion is thermodynamically inconsistent unless special relations between interaction parameters are invoked. However, the lack of strict mathematical consistency does not affect the practical use of the formalism: expressions for excess partial properties can be integrated along defined composition paths without significant loss of accuracy. Although thermodynamically consistent, the applicability of Darken's quadratic formalism to strongly interacting systems remains to be established by experiment.
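
For illustration, a first-order (Wagner-type) interaction-parameter estimate of a solute activity coefficient in a dilute multicomponent solution, ln γ_i = ln γ_i° + Σ_j ε_i^j x_j; the species and parameter values below are hypothetical placeholders:

    import math

    ln_gamma0 = math.log(0.8)            # infinite-dilution activity coefficient of solute i (placeholder)
    eps = {"C": 5.1, "Si": 2.4}          # hypothetical interaction parameters of i with C and Si
    x = {"C": 0.02, "Si": 0.01}          # mole fractions of the other solutes

    ln_gamma = ln_gamma0 + sum(eps[j] * x[j] for j in eps)
    print(math.exp(ln_gamma))            # estimated activity coefficient of solute i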

Relevance:

30.00%

Publisher:

Abstract:

This paper describes an algorithm for constructing a solid model (boundary representation) from point data measured on the faces of an object. The point data is assumed to be clustered for each face. The algorithm does not require any computer model of the part to exist and does not require the user to input any topological information about the part. The property that a convex solid can be constructed uniquely from geometric input alone is utilized in the current work. Any object can be represented as a combination of convex solids. The proposed algorithm attempts to construct convex polyhedra from the given input. The polyhedra so obtained are then checked against the input data for containment, and those polyhedra that satisfy this check are combined (using the boolean union operation) to realise the solid model. Results of an implementation are presented.
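
An illustrative step in the spirit of this approach (not the paper's implementation): build a convex polyhedron, here a convex hull, from one cluster of measured points and test other measurements for containment.

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    def convex_block(cluster_pts):
        """Convex polyhedron spanned by one cluster of measured face points."""
        return ConvexHull(cluster_pts)

    def points_inside(hull_pts, query_pts):
        """Boolean mask: which query points fall inside the convex block."""
        return Delaunay(hull_pts).find_simplex(query_pts) >= 0

    rng = np.random.default_rng(1)
    cluster = rng.random((20, 3))            # stand-in for points measured on a group of faces
    hull = convex_block(cluster)
    print(hull.volume, points_inside(cluster[hull.vertices], rng.random((5, 3))))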

Relevance:

30.00%

Publisher:

Abstract:

The article presents a generalized analytical expression for the integral excess Gibbs free energy of mixing of a ternary system. Twelve constants of the equation are assessed by least-squares regression analysis of the experimental integral excess data of the constituent binaries; three ternary parameters are evaluated by a regression analysis based on partial experimental data for one component of the ternary system. The assessed values of the ternary parameters describe the nature of the ternary interaction in the system. Activities and isoactivities of the components in the Ag-Au-Cu system at 1350 K are calculated and found to be in good agreement with the experimental data. This analytical treatment is particularly useful for ternary systems where the thermodynamic data are available from different sources.
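
The abstract does not reproduce the twelve-constant expression itself, so the sketch below only illustrates the least-squares step on a constituent binary, using a generic Redlich-Kister-style expansion G^xs = x1·x2·Σ_k L_k (x1 − x2)^k and synthetic data:

    import numpy as np

    def rk_design_matrix(x1, order=3):
        """Columns are the basis terms x1*x2*(x1 - x2)**k of the expansion."""
        x2 = 1.0 - x1
        return np.column_stack([x1 * x2 * (x1 - x2)**k for k in range(order)])

    x1 = np.linspace(0.1, 0.9, 9)
    g_xs_measured = -x1 * (1 - x1) * (4000 + 800 * (2 * x1 - 1))   # synthetic "experimental" data
    L, *_ = np.linalg.lstsq(rk_design_matrix(x1), g_xs_measured, rcond=None)
    print(L)   # fitted binary interaction coefficients (approx. [-4000, -800, 0])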

Relevance:

30.00%

Publisher:

Abstract:

In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search time requirements of the overall decision-making process. Further, it is important that the abstraction be generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), that can be built by scanning the database only once. The PC-tree is a minimal-size complete representation of the data and can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale handwritten digit data set and use the experimental results to establish the efficacy of the proposed approach.
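
A minimal sketch of a pattern-count-tree-style structure (an assumed simplification of the idea, not the paper's exact PC-tree): a prefix tree over transaction patterns with per-node counts, built in a single pass over the database.

    class PCNode:
        def __init__(self):
            self.count = 0
            self.children = {}

    def build_pc_tree(transactions):
        root = PCNode()
        for pattern in transactions:              # single scan of the database
            node = root
            for item in sorted(pattern):          # canonical item order so prefixes are shared
                node = node.children.setdefault(item, PCNode())
                node.count += 1                   # shared prefixes accumulate counts
        return root

    tree = build_pc_tree([{"a", "b", "c"}, {"a", "b"}, {"a", "c"}])
    print(tree.children["a"].count)               # 3: all transactions share the prefix 'a'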

Relevance:

30.00%

Publisher:

Abstract:

This paper reports the results of employing an artificial bee colony search algorithm for synthesizing a mutually coupled lumped-parameter ladder-network representation of a transformer winding, starting from its measured magnitude frequency response. The existing bee colony algorithm is suitably adapted by appropriately defining constraints, inequalities, and bounds to restrict the search space and thereby ensure synthesis of a nearly unique ladder network corresponding to each frequency response. Ensuring near-uniqueness while constructing the reference circuit (i.e., the representation of the healthy winding) is the objective. Furthermore, the synthesized circuits must exhibit physical realizability. The proposed method is easy to implement and time efficient, and the problems associated with supplying an initial guess in existing methods are circumvented. Experimental results are reported on two types of actual, single, isolated transformer windings (continuous disc and interleaved disc).
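
A heavily simplified sketch of such a bounded search loop (not the authors' bee colony implementation, and with a toy stand-in for the ladder-network model): candidate parameter vectors are perturbed within physical bounds and kept when they reduce the error against the measured magnitude frequency response.

    import numpy as np

    def search(measured, simulate, lo, hi, n_food=20, iters=200, seed=0):
        """Maintain a population of bounded candidate parameter sets and greedily improve them."""
        rng = np.random.default_rng(seed)
        foods = rng.uniform(lo, hi, size=(n_food, len(lo)))
        costs = np.array([np.linalg.norm(simulate(f) - measured) for f in foods])
        for _ in range(iters):
            for i in range(n_food):                   # employed-bee-like neighbourhood move
                partner = foods[rng.integers(n_food)]
                trial = np.clip(foods[i] + rng.uniform(-1, 1, len(lo)) * (foods[i] - partner), lo, hi)
                c = np.linalg.norm(simulate(trial) - measured)
                if c < costs[i]:                      # greedy selection
                    foods[i], costs[i] = trial, c
        return foods[np.argmin(costs)]

    # Toy second-order magnitude response standing in for the winding model.
    freqs = np.linspace(1.0, 10.0, 50)
    def simulate(p):
        return p[0] / np.sqrt((1 - (freqs / p[1])**2)**2 + (freqs / p[2])**2)

    true = np.array([2.0, 5.0, 8.0])
    lo, hi = np.array([0.1, 1.0, 1.0]), np.array([5.0, 10.0, 10.0])
    print(search(simulate(true), simulate, lo, hi))   # best-found parameters (should lie near `true`)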

Relevance:

30.00%

Publisher:

Abstract:

This paper reports the results of employing an artificial bee colony search algorithm for synthesizing a mutually coupled lumped-parameter ladder-network representation of a transformer winding, starting from its measured magnitude frequency response. The existing bee colony algorithm is suitably adapted by appropriately defining constraints, inequalities, and bounds to restrict the search space and thereby ensure synthesis of a nearly unique ladder network corresponding to each frequency response. Ensuring near-uniqueness while constructing the reference circuit (i.e., the representation of the healthy winding) is the objective. Furthermore, the synthesized circuits must exhibit physical realizability. The proposed method is easy to implement and time efficient, and the problems associated with supplying an initial guess in existing methods are circumvented. Experimental results are reported on two types of actual, single, isolated transformer windings (continuous disc and interleaved disc).

Relevance:

30.00%

Publisher:

Abstract:

Sparse representation based classification (SRC) is one of the most successful methods developed in recent times for face recognition. Optimal projection for sparse representation based classification (OPSRC) [1] provides a dimensionality reduction map that is supposed to give optimum performance within the SRC framework. However, the computational complexity involved in this method is too high. Here, we propose a new projection technique using the data scatter matrix, which is computationally superior to the optimal projection method with classification accuracy comparable to OPSRC. The performance of the proposed approach is benchmarked on various publicly available face databases.
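
An illustrative scatter-matrix projection (an assumed variant for illustration, not the paper's exact construction): project face vectors onto the leading eigenvectors of the total data scatter matrix before running an SRC-style classifier on the reduced representation.

    import numpy as np

    def scatter_projection(X, dim):
        """X: samples x features. Returns a features x dim projection matrix."""
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc                         # total data scatter matrix
        vals, vecs = np.linalg.eigh(S)        # eigen-decomposition (ascending eigenvalues)
        return vecs[:, -dim:]                 # leading eigenvectors

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 256))           # stand-in for vectorised face images
    P = scatter_projection(X, dim=20)
    X_low = X @ P                             # reduced representation fed to the SRC stage
    print(X_low.shape)                        # (100, 20)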

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution based on the existing low-spatial-resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta, where MODIS-derived inundation extent has been estimated at 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA), which is particularly effective for understanding the inundation variability, interpolating in time, and filling in missing values. Two innovative methods are developed (linear regression and matrix inversion), both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data, and the downscaled fields show the expected space-time behaviors from MODIS. A 20-yr dataset of inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to variables other than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water Ocean Topography (SWOT) mission to develop and test the instrument simulator as well as to select calibration/validation sites (with high space-time inundation variability). In addition, once SWOT observations are available, the downscaling methodology will be calibrated on them in order to downscale the GIEMS dataset and to extend the SWOT benefits back in time to 1993.
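
A minimal sketch of the PCA step on a space-time inundation record (an illustration of the decomposition only, not the GIEMS/MODIS processing chain), where rows are time steps and columns are grid cells:

    import numpy as np

    def pca_reconstruct(field, n_modes):
        """Keep the leading spatial modes and their temporal coefficients."""
        mean = field.mean(axis=0)
        U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)
        approx = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]   # truncated reconstruction
        return approx + mean

    rng = np.random.default_rng(0)
    record = rng.random((120, 400))          # stand-in: 120 months x 400 low-resolution pixels
    smooth = pca_reconstruct(record, n_modes=3)
    print(smooth.shape)                      # (120, 400)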

Relevance:

30.00%

Publisher:

Abstract:

In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary; merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire data at a time. As an application, we consider the problem of image denoising and present a comparative analysis of our algorithm with the standard learning techniques that use the entire database at a time, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction in training time without significantly affecting the denoising performance.
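
A sketch of the split-and-merge idea under assumed shapes and off-the-shelf components (not the paper's implementation): cluster the training patches, learn a local dictionary per cluster, then learn the global dictionary from the pooled local atoms.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import DictionaryLearning

    def split_and_merge(X, n_clusters=4, local_atoms=16, global_atoms=32):
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
        local = []
        for c in range(n_clusters):                              # "split": per-cluster dictionaries
            dl = DictionaryLearning(n_components=local_atoms, max_iter=50, random_state=0)
            local.append(dl.fit(X[labels == c]).components_)
        atoms = np.vstack(local)                                 # pool all locally learned atoms
        merger = DictionaryLearning(n_components=global_atoms, max_iter=50, random_state=0)
        return merger.fit(atoms).components_                     # "merge": global dictionary

    X = np.random.default_rng(0).normal(size=(400, 64))          # stand-in for 8x8 image patches
    D = split_and_merge(X)
    print(D.shape)                                               # (32, 64)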