97 results for Unsupervised Classification


Relevance: 20.00%

Abstract:

A general analysis of squeezing transformations for two-mode systems is given based on the four-dimensional real symplectic group Sp(4, R). Within the framework of the unitary (metaplectic) representation of this group, a distinction between compact photon-number-conserving and noncompact photon-number-nonconserving squeezing transformations is made. We exploit the U(2) invariant squeezing criterion to divide the set of all squeezing transformations into a two-parameter family of distinct equivalence classes with representative elements chosen for each class. Familiar two-mode squeezing transformations in the literature are recognized in our framework and seen to form a set of measure zero. Examples of squeezed coherent and thermal states are worked out. The need to extend the heterodyne detection scheme to encompass all of U(2) is emphasized, and known experimental situations where all U(2) elements can be reproduced are briefly described.
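As a numerical illustration of the group-theoretic setting, the sketch below (not from the paper) checks that one conventional two-mode squeezing matrix satisfies the defining Sp(4, R) condition S^T Omega S = Omega in the (q1, p1, q2, p2) ordering; the specific matrix and ordering are assumptions chosen for illustration only.

```python
import numpy as np

# Symplectic form for two modes in the (q1, p1, q2, p2) ordering.
Omega = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def is_symplectic(S, tol=1e-10):
    """Check the defining Sp(4, R) condition S^T Omega S = Omega."""
    return np.allclose(S.T @ Omega @ S, Omega, atol=tol)

def two_mode_squeezer(r):
    """One conventional photon-number-nonconserving squeezer coupling the two modes:
    q1 -> q1 cosh r + q2 sinh r, p1 -> p1 cosh r - p2 sinh r, and symmetrically."""
    c, s = np.cosh(r), np.sinh(r)
    return np.array([[c, 0.0, s, 0.0],
                     [0.0, c, 0.0, -s],
                     [s, 0.0, c, 0.0],
                     [0.0, -s, 0.0, c]])

S = two_mode_squeezer(0.8)
print(is_symplectic(S))  # True: the squeezer is an element of Sp(4, R)
```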

Relevance: 20.00%

Abstract:

Three classification techniques, namely K-means Cluster Analysis (KCA), Fuzzy Cluster Analysis (FCA), and Kohonen Neural Networks (KNN), were employed to group the 25 microwatersheds of the Kherthal watershed, Rajasthan, into homogeneous groups as a basis for suitable conservation and management practices. Ten morphological parameters, namely drainage density (D-d), bifurcation ratio (R-b), stream frequency (F-u), length of overland flow (L-o), form factor (R-f), shape factor (B-s), elongation ratio (R-e), circulatory ratio (R-c), compactness coefficient (C-c), and texture ratio (T), were used for the classification. The optimal number of groups was chosen based on two cluster validation indices, Davies-Bouldin and Dunn's. Comparative analysis of the clustering techniques revealed that 13 of the 25 microwatersheds (52%) are commonly suggested by KCA, FCA, and KNN; 17 of 25 (68%) are commonly suggested by KCA and FCA, 16 of 25 (64%) by FCA and KNN, and 15 of 25 (60%) by KNN and KCA. KNN sensitivity analysis showed that the effect of the number of epochs (1000, 3000, 5000) and of the learning rate (0.01, 0.1-0.9) on the total squared error is significant, even though no fixed trend is observed. Sensitivity analysis also revealed that when the number of groups is increased from 5 to 6, 7, and 8, the microwatersheds still occupy all the groups, although their number in each group differs. (C) 2010 International Association of Hydro-environment Engineering and Research, Asia Pacific Division. Published by Elsevier B.V. All rights reserved.
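As a minimal sketch of the clustering-plus-validation step, the following uses scikit-learn's KMeans and the Davies-Bouldin index on a random stand-in for the 25 x 10 parameter matrix; the data, scaling, and range of group counts are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import davies_bouldin_score

# Illustrative stand-in for the 25 microwatersheds x 10 morphological parameters
# (Dd, Rb, Fu, Lo, Rf, Bs, Re, Rc, Cc, T); real values come from the watershed survey.
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(25, 10)))

# Choose the number of groups by minimizing the Davies-Bouldin index.
scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

best_k = min(scores, key=scores.get)
print(f"Davies-Bouldin suggests k = {best_k}")
```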

Relevance: 20.00%

Abstract:

In this paper, we show that it is possible to reduce the complexity of Intra MB coding in H.264/AVC using a novel chance-constrained classifier. Using simple mean-variance pairs, our technique reduces the complexity of the Intra MB coding process with a negligible loss in PSNR. We also present an alternative approach to the underlying classification problem that is equivalent to a machine-learning formulation. Implementation results show that the proposed method reduces encoding time to about 20% of that of the reference implementation, with an average loss of 0.05 dB in PSNR.
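A highly simplified sketch of the idea of steering the intra-mode search with per-macroblock mean-variance statistics is given below; the fixed variance threshold and reduced mode set are placeholders for illustration and do not reproduce the paper's chance-constrained classifier.

```python
import numpy as np

def mb_mean_variance(frame, mb_size=16):
    """Yield (row, col, mean, variance) for each macroblock of a luma frame."""
    h, w = frame.shape
    for r in range(0, h - mb_size + 1, mb_size):
        for c in range(0, w - mb_size + 1, mb_size):
            mb = frame[r:r + mb_size, c:c + mb_size].astype(np.float64)
            yield r, c, mb.mean(), mb.var()

def reduced_mode_set(variance, var_threshold=50.0):
    """Toy decision rule: flat macroblocks search fewer intra prediction modes."""
    full_modes = list(range(9))   # the 9 Intra 4x4 prediction modes
    cheap_modes = [2]             # DC mode only
    return cheap_modes if variance < var_threshold else full_modes

frame = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
for r, c, m, v in mb_mean_variance(frame):
    modes = reduced_mode_set(v)
    print(f"MB({r:2d},{c:2d}) mean={m:6.1f} var={v:7.1f} -> {len(modes)} mode(s)")
```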

Relevance: 20.00%

Abstract:

Part classification and coding is still considered a laborious and time-consuming exercise. Given the crucial role it plays in developing automated CAPP systems, this article attempts to automate a few elements of this exercise using a shape-analysis model. In this study, a 24-vector directional template is used to represent the feature elements of the parts (candidate and prototype). Various transformation processes such as deformation, straightening, bypassing, insertion, and deletion are embedded in the proposed simulated annealing (SA)-like hybrid algorithm to match the candidate part with its prototype. For a candidate part, searching for its matching prototype in the stored part information is computationally expensive and requires a large search space. The proposed SA-like hybrid algorithm for the part classification problem considerably reduces the search space and ensures early convergence of the solution. The application of the proposed approach is illustrated with an example part, and the approach is then applied to the classification of 100 candidate parts and their prototypes to demonstrate the effectiveness of the algorithm. (C) 2003 Elsevier Science Ltd. All rights reserved.
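A rough sketch of an SA-like matching step is shown below: candidate and prototype parts are represented as hypothetical 24-direction chain codes, and annealing over cyclic start offsets minimizes an edit distance; the cost function, move set, and cooling schedule are assumptions for illustration, not the paper's algorithm.

```python
import math
import random

def levenshtein(a, b):
    """Edit distance between two direction-code sequences (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def sa_match_cost(candidate, prototype, iters=500, t0=5.0, alpha=0.99, seed=0):
    """Anneal over cyclic start offsets of the candidate to find the cheapest match."""
    rng = random.Random(seed)
    n = len(candidate)
    offset = 0
    cost = levenshtein(candidate, prototype)
    best, t = cost, t0
    for _ in range(iters):
        new_offset = (offset + rng.choice((-1, 1))) % n
        rotated = candidate[new_offset:] + candidate[:new_offset]
        new_cost = levenshtein(rotated, prototype)
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / t):
            offset, cost = new_offset, new_cost
            best = min(best, cost)
        t *= alpha
    return best

# Hypothetical 24-direction chain codes for a candidate part and two prototypes.
candidate = [0, 3, 6, 6, 9, 12, 12, 15, 18, 21]
prototypes = {"P1": [3, 6, 6, 9, 12, 12, 15, 18, 21, 0],
              "P2": [1, 4, 8, 8, 10, 14, 16, 16, 20, 22]}
print(min(prototypes, key=lambda k: sa_match_cost(candidate, prototypes[k])))  # "P1"
```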

Relevance: 20.00%

Abstract:

Segmental dynamic time warping (DTW) has been demonstrated to be a useful technique for finding acoustic similarity scores between segments of two speech utterances. Due to its high computational requirements, it has so far been computed offline, limiting the applications of the technique. In this paper, we present results on parallelizing this task by distributing the workload either statically or dynamically on an 8-processor cluster, and we discuss the trade-offs among the different distribution schemes. We show that online unsupervised pattern discovery using segmental DTW is feasible with as few as 8 processors, which brings the task within reach of today's general-purpose multi-core servers. We also show results on a 32-processor system and discuss factors affecting the scalability of our methods.
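A minimal sketch of statically distributing pairwise DTW computations over worker processes is given below (standard full DTW rather than the segmental variant); the synthetic MFCC-like segments and the 4-worker pool are assumptions for illustration.

```python
import numpy as np
from multiprocessing import Pool

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two feature sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def worker(pair):
    (i, x), (j, y) = pair
    return i, j, dtw_distance(x, y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical MFCC-like segments: variable-length sequences of 13-dim frames.
    segments = [rng.normal(size=(rng.integers(20, 40), 13)) for _ in range(8)]
    pairs = [((i, segments[i]), (j, segments[j]))
             for i in range(len(segments)) for j in range(i + 1, len(segments))]
    with Pool(processes=4) as pool:     # static distribution over 4 workers
        for i, j, d in pool.map(worker, pairs):
            print(f"segments ({i},{j}): DTW distance = {d:.2f}")
```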

Relevance: 20.00%

Abstract:

Land cover (LC) refers to what is actually present on the ground and provides insights that underlie solutions to many issues, from water pollution to sustainable economic development. One of the greatest challenges in modeling LC changes using remotely sensed (RS) data is the scale-resolution mismatch: the spatial resolution is coarser than what is required, and the resulting sub-pixel heterogeneity is important but not readily knowable, since many pixels consist of a mixture of multiple classes. The usual solution to the mixed-pixel problem is soft classification, which estimates the proportion of each class within a pixel; however, the spatial distribution of these class components within the pixel remains unknown. This study investigates Orthogonal Subspace Projection (OSP), an unmixing technique, and uses a pixel-swapping algorithm to predict the spatial distribution of LC at sub-pixel resolution. Both algorithms are applied to several simulated and actual satellite images for validation. The accuracy on the simulated images is ~100%, while IRS LISS-III and MODIS data show accuracies of 76.6% and 73.02%, respectively. This demonstrates the relevance of these techniques for applications such as urban/non-urban and forest/non-forest classification studies.
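A minimal sketch of the OSP abundance estimate is given below for a hypothetical two-endmember, four-band pixel; the subsequent pixel-swapping stage, which places these fractions spatially at sub-pixel resolution, is not shown.

```python
import numpy as np

def osp_abundance(r, d, U):
    """Orthogonal Subspace Projection estimate of the target fraction in pixel r.

    r : observed pixel spectrum, shape (bands,)
    d : target endmember spectrum, shape (bands,)
    U : matrix of the remaining (background) endmembers, shape (bands, k)
    """
    P = np.eye(len(r)) - U @ np.linalg.pinv(U)   # projector onto the complement of span(U)
    return float(d @ P @ r) / float(d @ P @ d)

# Hypothetical two-endmember scene (e.g. urban vs non-urban) with 4 spectral bands.
urban = np.array([0.6, 0.5, 0.4, 0.3])
veg = np.array([0.1, 0.2, 0.5, 0.6])
true_fraction = 0.7
pixel = true_fraction * urban + (1 - true_fraction) * veg
print(osp_abundance(pixel, d=urban, U=veg[:, None]))   # ~0.7 for a noise-free mixture
```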

Relevance: 20.00%

Abstract:

Structural alignments are the most widely used tools for comparing proteins with low sequence similarity. The main contribution of this paper is to derive various kernels on proteins from structural alignments, which do not use sequence information. Central to the kernels is a novel alignment algorithm which matches substructures of fixed size using spectral graph matching techniques. We derive positive semi-definite kernels which capture the notion of similarity between substructures, and use them as building blocks for more sophisticated kernels on protein structures. To empirically evaluate the kernels, we used a 40% sequence-non-redundant set of structures from 15 different SCOP superfamilies. When used with SVMs, the kernels show performance competitive with CE, a state-of-the-art structure comparison program.
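The sketch below illustrates the flavor of spectral matching between two fixed-size substructures: eigenvector profiles of the internal distance matrices are matched with the Hungarian algorithm to yield a dissimilarity score; the profile and cost used here are toy assumptions and do not reproduce the paper's positive semi-definite kernel construction.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_substructures(A, B):
    """Spectral matching of two equal-size substructures given by C-alpha coordinates.

    A, B : arrays of shape (k, 3) with residue coordinates of each substructure.
    Returns the matched residue pairs and a dissimilarity score.
    """
    # Internal distance matrices are invariant to rotation and translation.
    DA, DB = cdist(A, A), cdist(B, B)
    # Leading eigenvectors describe each residue's role in the substructure graph.
    _, VA = np.linalg.eigh(DA)
    _, VB = np.linalg.eigh(DB)
    profile_A = np.abs(VA[:, -3:])   # eigenvector signs are arbitrary, so compare magnitudes
    profile_B = np.abs(VB[:, -3:])
    cost = cdist(profile_A, profile_B)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))
B = A[rng.permutation(8)] + rng.normal(scale=0.01, size=(8, 3))  # jittered, reordered copy
pairs, score = match_substructures(A, B)
print(score)   # small score: the two substructures match closely
```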

Relevance: 20.00%

Abstract:

This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with a high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well for large datasets when compared to the state-of-the-art classifiers, Support Vector Machines (SVMs). Experiments on real world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time and achieves similar accuracies.
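A small sketch of the cluster-level second-order cone constraints is given below, using scikit-learn's BIRCH for the second-order statistics and cvxpy (an assumption for illustration, not the paper's solver) for the cone program; the data, cluster counts, and confidence level eta are illustrative.

```python
import numpy as np
import cvxpy as cp
from sklearn.cluster import Birch

rng = np.random.default_rng(0)
# Hypothetical, well-separated two-class data; in the paper each class is clustered separately.
X_pos = rng.normal(loc=+3.0, size=(500, 2))
X_neg = rng.normal(loc=-3.0, size=(500, 2))

def cluster_moments(X, n_clusters=3):
    """Mean and (spherical) standard deviation of each BIRCH cluster."""
    labels = Birch(n_clusters=n_clusters).fit_predict(X)
    return [(X[labels == k].mean(axis=0), float(X[labels == k].std()))
            for k in np.unique(labels)]

eta = 0.8                            # required per-cluster classification probability
kappa = np.sqrt(eta / (1.0 - eta))   # Chebyshev-Cantelli factor
w, b = cp.Variable(2), cp.Variable()
constraints = []
for y, X in ((+1, X_pos), (-1, X_neg)):
    for mu, sigma in cluster_moments(X):
        # Second-order cone constraint: the whole cluster lies on the correct
        # side of the margin with probability at least eta.
        constraints.append(y * (mu @ w + b) >= 1 + kappa * sigma * cp.norm(w, 2))

problem = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)
problem.solve()
print(w.value, b.value)
```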

Relevance: 20.00%

Abstract:

The covalent linkage between the side chain and the backbone nitrogen atom of proline leads to the formation of the five-membered pyrrolidine ring and hence restricts the backbone torsional angle phi to values of -60 degrees +/- 30 degrees for L-proline. Diproline segments therefore constitute a chain fragment with considerably reduced conformational choices. In the current study, the conformational states of the diproline segment ((L)Pro-(L)Pro) found in proteins have been investigated, with an emphasis on the cis and trans states of the Pro-Pro peptide bond. The occurrence of diproline segments in turns and other secondary structures has been studied and compared to that of Xaa-Pro-Yaa segments in proteins, giving a better understanding of the restrictions imposed on other residues by the diproline segment and by a single proline residue. The study indicates that P(II)-P(II) and P(II)-alpha are the most favorable conformational states for the diproline segment. The analysis of Xaa-Pro-Yaa sequences reveals that the Xaa-Pro peptide bond exists preferentially as the trans conformer rather than the cis conformer. The present study may lead to a better understanding of the behavior of proline in diproline segments, which can facilitate the design of diproline-based synthetic templates for biological and structural studies. (C) 2011 Wiley Periodicals, Inc. Biopolymers 97: 54-64, 2012.
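For the cis/trans assignment discussed above, a small self-contained sketch of classifying the omega torsion from backbone coordinates is given below; the idealised coordinates are hypothetical and only illustrate the geometry, not the paper's dataset.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle in degrees defined by four consecutive atoms."""
    b0, b1, b2 = p0 - p1, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

def peptide_bond_isomer(ca_i, c_i, n_j, ca_j, cis_cutoff=30.0):
    """Classify the omega torsion CA(i)-C(i)-N(i+1)-CA(i+1) as cis or trans."""
    omega = dihedral(ca_i, c_i, n_j, ca_j)
    return ("cis" if abs(omega) < cis_cutoff else "trans"), omega

# Hypothetical, idealised coplanar coordinates of a trans peptide bond.
ca_i = np.array([0.0, 0.0, 0.0])
c_i = np.array([1.5, 0.0, 0.0])
n_j = np.array([2.2, 1.1, 0.0])
ca_j = np.array([3.6, 1.2, 0.0])
print(peptide_bond_isomer(ca_i, c_i, n_j, ca_j))   # ('trans', ~180.0)
```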

Relevance: 20.00%

Abstract:

A technique is proposed for classifying respiratory volume waveforms (RVWs) into categories corresponding to normal and abnormal respiratory pathways. The proposed method transforms the temporal sequence into the frequency domain using an orthogonal transform, namely the discrete cosine transform (DCT), and the transformed signal is then pole-zero modelled. A Bayes classifier using the model pole angles as the feature vector performed satisfactorily when classifying a limited number of RVWs recorded under the deep and rapid (DR) manoeuvre.
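A rough sketch of the pipeline (DCT, an all-pole fit standing in for full pole-zero modelling, pole angles as features, a Gaussian naive Bayes classifier) is shown below on synthetic stand-in waveforms; the model order, least-squares AR fit, and toy signals are assumptions, not the paper's data or estimator.

```python
import numpy as np
from scipy.fft import dct
from sklearn.naive_bayes import GaussianNB

def pole_angle_features(signal, order=4):
    """DCT the waveform, fit an all-pole (AR) model by least squares,
    and return the sorted pole angles as a feature vector."""
    c = dct(signal, norm="ortho")
    # AR fit: c[n] ~= a1*c[n-1] + ... + ap*c[n-p]
    X = np.column_stack([c[order - k - 1:len(c) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, c[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.sort(np.abs(np.angle(poles)))

# Hypothetical stand-ins for deep-and-rapid manoeuvre volume waveforms
# (regular "normal" sinusoids vs decaying, noisier "abnormal" ones).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 400)
normals = [np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.normal(size=t.size) for _ in range(10)]
abnormals = [np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.2 * t)
             + 0.3 * rng.normal(size=t.size) for _ in range(10)]
X = np.array([pole_angle_features(s) for s in normals + abnormals])
y = np.array([0] * 10 + [1] * 10)
clf = GaussianNB().fit(X, y)
print(clf.score(X, y))   # training accuracy of the toy classifier
```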

Relevance: 20.00%

Abstract:

Earthquakes cause massive road damage, which in turn has adverse effects on society. Previous studies have quantified the damage caused to residential and commercial buildings; however, few studies have quantified the road damage caused by earthquakes. In this study, an attempt has been made to propose a new scale to classify and quantify road damage due to earthquakes, based on data collected from major past earthquakes. The proposed classification is called the road damage scale (RDS). Earthquake details such as magnitude, distance of the road damage from the epicenter, focal depth, and photographs of damaged roads have been collected from various sources, along with the reported modified Mercalli intensity (MMI). The widely used MMI scale is found to be inadequate for clearly defining road damage. The proposed RDS is applied to the various reported cases of road damage, which are reclassified accordingly. The correlation between RDS and the earthquake parameters of magnitude, epicenter distance, and hypocenter distance, as well as the combination of magnitude with epicenter and hypocenter distance, has been studied using the available data. It is observed that the proposed RDS correlates well with the available earthquake data when compared with the MMI scale. Among the several correlations studied, that between RDS and the combination of magnitude and epicenter distance is found to be appropriate. A summary of these correlations, their limitations, and the applicability of the proposed scale for forecasting road damage and carrying out vulnerability analysis in urban areas is presented in the paper.
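Purely as an illustration of correlating a damage scale with magnitude and epicenter distance, the sketch below fits one plausible functional form by least squares to placeholder values; the numbers are hypothetical and the functional form is an assumption, not the paper's regression.

```python
import numpy as np

# Hypothetical placeholder observations (RDS class, magnitude M, epicentre distance R in km);
# the paper's actual catalogue of reported road damage is not reproduced here.
rds = np.array([5, 4, 4, 3, 2, 2, 1])
mag = np.array([7.9, 7.6, 7.0, 6.8, 6.5, 6.3, 6.0])
dist = np.array([10.0, 25.0, 40.0, 60.0, 90.0, 120.0, 180.0])

# One plausible functional form: RDS ~ a*M + b*log10(R) + c, fitted by least squares.
A = np.column_stack([mag, np.log10(dist), np.ones_like(mag)])
coef, *_ = np.linalg.lstsq(A, rds, rcond=None)
pred = A @ coef
r = np.corrcoef(rds, pred)[0, 1]
print(f"coefficients = {coef}, correlation = {r:.2f}")
```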

Relevance: 20.00%

Abstract:

The widely used Bayesian classifier is based on the assumption of equal prior probabilities for all the classes. However, equal prior probabilities may not guarantee high classification accuracy for the individual classes. Here, we propose a novel technique, the Hybrid Bayesian Classifier (HBC), in which the class prior probabilities are determined by unmixing supplemental low-spatial, high-spectral-resolution multispectral (MS) data and are assigned to every pixel of the high-spatial, low-spectral-resolution MS data being classified. This is demonstrated with two separate experiments. In the first, class abundances are estimated per pixel by unmixing Moderate Resolution Imaging Spectroradiometer (MODIS) data and used as prior probabilities, while posterior probabilities are determined from training data obtained on the ground; these are used to classify Indian Remote Sensing Satellite LISS-III MS data with the Bayesian classifier. In the second experiment, abundances obtained by unmixing Landsat Enhanced Thematic Mapper Plus data are used as priors, and posterior probabilities determined from ground data are used to classify IKONOS MS images with the Bayesian classifier. The results indicate that HBC systematically exploits the information from the two image sources, improving the overall accuracy of LISS-III MS classification by 6% and of IKONOS MS classification by 9%. Inclusion of the prior probabilities increased the average producer's and user's accuracies by 5.5% and 6.5% for LISS-III MS with six classes, and by 12.5% and 5.4% for IKONOS MS with the five classes considered.
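A minimal sketch of the prior-times-likelihood combination is given below: per-pixel abundances from unmixing serve as class priors and Gaussian class-conditional densities estimated from training data serve as likelihoods; the band statistics and abundances shown are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def hybrid_bayes_classify(pixel, class_stats, priors):
    """Pick the class maximizing prior (from unmixing) x Gaussian likelihood (from training).

    pixel       : spectral vector of the high-spatial-resolution image
    class_stats : {class: (mean_vector, covariance_matrix)} estimated from ground truth
    priors      : {class: abundance} for this pixel, from unmixing the coarser MS image
    """
    posteriors = {}
    for c, (mu, cov) in class_stats.items():
        likelihood = multivariate_normal.pdf(pixel, mean=mu, cov=cov)
        posteriors[c] = priors.get(c, 0.0) * likelihood
    total = sum(posteriors.values())
    return max(posteriors, key=posteriors.get), {c: p / total for c, p in posteriors.items()}

# Hypothetical 3-band statistics for two classes and per-pixel abundances from unmixing.
class_stats = {"urban": (np.array([0.40, 0.35, 0.30]), 0.01 * np.eye(3)),
               "vegetation": (np.array([0.10, 0.25, 0.45]), 0.01 * np.eye(3))}
priors = {"urban": 0.7, "vegetation": 0.3}
pixel = np.array([0.33, 0.31, 0.34])
print(hybrid_bayes_classify(pixel, class_stats, priors))
```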