965 results for Landmark-based spectral clustering


Relevance: 100.00%

Abstract:

Understanding neural function requires knowledge gained from analysing electrophysiological data. The process of assigning the spikes of a multichannel signal to clusters, called spike sorting, is one of the key problems in such analysis. Various automated spike sorting techniques have been proposed, each with advantages and disadvantages regarding accuracy and computational cost. Developing spike sorting methods that are both highly accurate and computationally inexpensive therefore remains a challenge in biomedical engineering practice.

Relevance: 100.00%

Abstract:

Spike sorting plays an important role in analysing electrophysiological data and understanding neural functions. Developing spike sorting methods that are highly accurate and computationally inexpensive is a standing challenge in biomedical engineering practice. This paper proposes an automatic, unsupervised spike sorting method that uses landmark-based spectral clustering (LSC) on features extracted with the locality preserving projection (LPP) technique. The gap statistic is employed to estimate the number of clusters before LSC is performed. Experimental results show that LPP spike features are more discriminative than those of the popular wavelet transformation (WT). Accordingly, the proposed LPP-LSC method clearly outperforms the existing method that combines WT feature extraction with superparamagnetic clustering. LPP and LSC are both linear algorithms, which helps reduce the computational burden, so their combination can be applied to real-time spike analysis.
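
For readers unfamiliar with the clustering step, the following is a minimal sketch of generic landmark-based spectral clustering in Python (landmarks chosen by k-means, Gaussian affinities, NumPy and scikit-learn assumed); it illustrates the general LSC idea rather than the authors' exact LPP-LSC pipeline:

    # Generic landmark-based spectral clustering sketch (not the paper's exact implementation).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import rbf_kernel

    def landmark_spectral_clustering(X, n_clusters, n_landmarks=50, gamma=1.0, seed=0):
        # 1. Choose landmark points as k-means centroids of the data.
        landmarks = KMeans(n_clusters=n_landmarks, n_init=10, random_state=seed).fit(X).cluster_centers_
        # 2. Affinities between every sample and every landmark (n x p), row-normalised.
        Z = rbf_kernel(X, landmarks, gamma=gamma)
        Z = Z / Z.sum(axis=1, keepdims=True)
        # 3. Column-degree normalisation, so Z_hat @ Z_hat.T approximates the full graph affinity.
        Z_hat = Z / np.sqrt(Z.sum(axis=0))
        # 4. Spectral embedding from the left singular vectors of Z_hat, clustered with k-means.
        U, _, _ = np.linalg.svd(Z_hat, full_matrices=False)
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(U[:, :n_clusters])

Because only the n x p landmark affinity matrix is decomposed, the cost scales roughly linearly in the number of spikes, which is the property the paper exploits for real-time analysis.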

Relevance: 100.00%

Abstract:

This paper presents an improved hierarchical clustering algorithm for the land cover mapping problem using a quasi-random distribution. Initially, Niche Particle Swarm Optimization (NPSO) with a pseudo/quasi-random distribution is used to split the data into a number of cluster centers that satisfies the Bayesian Information Criterion (BIC). The main objective is to search for and locate the best possible number of clusters and their centers. NPSO, which depends heavily on the initial distribution of particles in the search space, has not been exploited to its full potential. In this study, we compare the more uniformly distributed quasi-random initialization with a pseudo-random distribution in NPSO for splitting the data set. The Faure method is used to generate the quasi-random distribution. The performance of previously proposed methods, namely K-means, Mean Shift Clustering (MSC) and NPSO with pseudo-random initialization, is compared with the proposed approach, NPSO with a quasi-random (Faure) distribution. These algorithms are applied to a synthetic data set and a multi-spectral satellite image (Landsat 7 Thematic Mapper). From the results obtained, we conclude that using a quasi-random sequence with NPSO in the hierarchical clustering algorithm yields more accurate data classification.
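
As a point of reference, the difference between the two initializations amounts to how the swarm's starting positions are drawn. A minimal Python sketch follows; SciPy's Halton generator stands in for the Faure sequence (scipy.stats.qmc does not provide a Faure generator), and the function and parameter names are illustrative only:

    # Initialising particle positions with a quasi-random versus a pseudo-random sequence.
    import numpy as np
    from scipy.stats import qmc

    def init_particles(n_particles, bounds, quasi=True, seed=0):
        """bounds: list of (lower, upper) pairs, one per search-space dimension."""
        dim = len(bounds)
        if quasi:
            unit = qmc.Halton(d=dim, seed=seed).random(n_particles)        # low-discrepancy points in [0, 1)^dim
        else:
            unit = np.random.default_rng(seed).random((n_particles, dim))  # pseudo-random points
        lower, upper = np.array(bounds, dtype=float).T
        return qmc.scale(unit, lower, upper)                               # map onto the search-space bounds

    # Example: 30 particles searching for cluster centers in a 2-band space with values in [0, 255].
    positions = init_particles(30, bounds=[(0, 255), (0, 255)])

The low-discrepancy sequence covers the search space more evenly than pseudo-random draws, which is the claimed source of the improved splitting behaviour.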

Relevance: 100.00%

Abstract:

Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of 'multi-point' kernels and study their applications. We study a class of kernels based on Jensen-type divergences and show that these can be extended to measure similarity among multiple points. We study tensor flattening methods and develop a multi-point (kernel) spectral clustering (MSC) method. We further emphasize a special case of the proposed kernels, which is a multi-point extension of the linear (dot-product) kernel, and show the existence of a cubic-time tensor flattening algorithm in this case. Finally, we illustrate the usefulness of our contributions using standard data sets and image segmentation tasks.
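
For orientation, a standard example of a multi-distribution divergence of the Jensen type is the generalized Jensen-Shannon divergence, which takes m distributions at once (the paper's kernels need not be exactly this construction):

    JS_\pi(p_1, \dots, p_m) \;=\; H\!\Big(\sum_{i=1}^{m} \pi_i\, p_i\Big) \;-\; \sum_{i=1}^{m} \pi_i\, H(p_i),

where H denotes the Shannon entropy and the weights \pi_i sum to one; a quantity defined on m arguments simultaneously is what makes a 'multi-point' notion of similarity possible.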

Relevance: 100.00%

Abstract:

A spectral angle based feature extraction method, Spectral Clustering Independent Component Analysis (SC-ICA), is proposed in this work to improve brain tissue classification from Magnetic Resonance Images (MRI). SC-ICA gives equal priority to global and local features, thereby attempting to resolve the inefficiency of conventional approaches in abnormal tissue extraction. First, the input multispectral MRI is divided into different clusters by spectral distance based clustering. Then, Independent Component Analysis (ICA) is applied to the clustered data, in conjunction with Support Vector Machines (SVM), for brain tissue analysis. Normal and abnormal datasets, consisting of real and synthetic T1-weighted, T2-weighted and proton density/fluid-attenuated inversion recovery images, were used to evaluate the performance of the new method. Comparative analysis with ICA-based SVM and other conventional classifiers established the stability and efficiency of SC-ICA based classification, especially in the reproduction of small abnormalities. Analysis of clinical abnormal cases demonstrated this through the highest Tanimoto index/accuracy values, 0.75/98.8%, for reproduced lesions, against 0.17/96.1% for ICA-based SVM. The experimental results recommend the proposed method as a promising approach for clinical and pathological studies of brain diseases.
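
A rough sketch of this kind of pipeline in Python (scikit-learn assumed) is shown below; the clustering choice (k-means on unit-normalised spectra as a spectral-angle proxy), the parameter values and the names are illustrative, not the authors' implementation:

    # Rough SC-ICA style pipeline: cluster multispectral voxels, then ICA within each cluster.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import FastICA

    def sc_ica_features(X, n_clusters=4, n_components=3, seed=0):
        """X: (n_voxels, n_channels) multispectral MR intensities, e.g. T1/T2/PD per voxel."""
        # Unit-normalise each voxel vector so Euclidean k-means approximates spectral-angle clustering.
        Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Xn)
        feats = np.zeros((X.shape[0], n_components))
        for c in range(n_clusters):
            idx = labels == c
            feats[idx] = FastICA(n_components=n_components, random_state=seed).fit_transform(X[idx])
        # A classifier such as sklearn.svm.SVC would then be trained on these per-cluster
        # independent components against tissue labels.
        return labels, feats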

Relevance: 100.00%

Abstract:

Subspace clustering groups a set of samples drawn from a union of several linear subspaces into clusters, so that the samples in the same cluster come from the same linear subspace. In the majority of existing work on subspace clustering, clusters are built from feature information alone, while sample correlations in the original spatial structure are simply ignored. Moreover, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based subspace clustering method (TLRRSC) that considers feature information and spatial structure simultaneously. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC can thus capture the global structure and inherent feature information of the data and provide a robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
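
To make the last step concrete, the sketch below fuses a feature-space similarity with a spatial similarity into one affinity matrix and feeds it to spectral clustering; the element-wise fusion rule and both similarity choices are generic stand-ins rather than the TLRR/sparse-coding construction of the paper:

    # Final-step sketch: joint affinity from feature and spatial similarities, then spectral clustering.
    from sklearn.cluster import SpectralClustering
    from sklearn.metrics.pairwise import rbf_kernel

    def joint_affinity_clustering(features, coords, n_clusters, gamma_f=1.0, gamma_s=0.1):
        """features: (n, d) sample features; coords: (n, k) positions in the original spatial structure."""
        W_feat = rbf_kernel(features, gamma=gamma_f)       # similarity in feature space
        W_spat = rbf_kernel(coords, gamma=gamma_s)         # similarity in the original spatial layout
        W = W_feat * W_spat                                # joint affinity (element-wise product)
        return SpectralClustering(n_clusters=n_clusters, affinity='precomputed').fit_predict(W)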

Relevance: 100.00%

Abstract:

Among current clustering algorithms for complex networks, Laplacian-based spectral clustering algorithms have the advantages of a rigorous mathematical basis and high accuracy. However, their applications are limited by their dependence on prior knowledge, such as the number of clusters, which is hard to obtain beforehand in most application scenarios. To address this problem, we propose a novel clustering algorithm, the Jordan-Form of Laplacian-Matrix based Clustering algorithm (JLMC). In JLMC, we propose a model to calculate the number n of clusters in a complex network based on the Jordan form of its corresponding Laplacian matrix; JLMC then partitions the network into n clusters using our proposed modularity density function (P function). We conduct extensive experiments on real and synthetic data, and the results reveal that JLMC can accurately obtain the number of clusters in a complex network and outperforms the Fast-Newman and Girvan-Newman algorithms in terms of clustering accuracy and time complexity.
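
As a simplified stand-in for the Jordan-form analysis, the textbook connection between Laplacian eigenvalues and cluster count can be sketched as follows: the number of near-zero eigenvalues of the Laplacian of a well-clustered undirected graph equals the number of clusters, and the eigengap gives a practical estimate. This is the generic heuristic, not JLMC itself:

    # Generic eigenvalue-based estimate of the number of clusters (simplified stand-in for JLMC).
    import numpy as np

    def estimate_n_clusters(A, max_k=20):
        """A: symmetric adjacency (or affinity) matrix of the network."""
        L = np.diag(A.sum(axis=1)) - A                      # unnormalised graph Laplacian
        eigvals = np.sort(np.linalg.eigvalsh(L))[:max_k]    # smallest Laplacian eigenvalues
        gaps = np.diff(eigvals)                             # gaps between consecutive eigenvalues
        return int(np.argmax(gaps)) + 1                     # largest gap after the k-th value -> k clusters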

Relevance: 100.00%

Abstract:

This paper proposes the use of the Bayes factor to replace the Bayesian Information Criterion (BIC) as a criterion for speaker clustering within a speaker diarization system. The BIC is one of the most popular decision criteria used in speaker diarization systems today. However, this paper shows that the BIC is only an approximation to the Bayes factor, i.e. the ratio of the marginal likelihoods of the data under each hypothesis. This paper uses the Bayes factor directly as a decision criterion for speaker clustering, thus removing the error introduced by the BIC approximation. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, leading to a 14.7% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
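
The approximation in question is the standard one: with BIC_i = k_i \ln N - 2 \ln \hat{L}_i for hypothesis H_i (k_i parameters, N frames, maximised likelihood \hat{L}_i), the log Bayes factor between two clustering hypotheses satisfies

    \ln B_{10} \;=\; \ln p(X \mid H_1) - \ln p(X \mid H_0) \;\approx\; -\tfrac{1}{2}\left(\mathrm{BIC}_1 - \mathrm{BIC}_0\right),

so a BIC-based merging decision approximates the marginal-likelihood comparison only asymptotically, whereas the proposed criterion evaluates the Bayes factor directly.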

Relevance: 100.00%

Abstract:

This paper proposes the use of eigenvoice modeling techniques with the Cross Likelihood Ratio (CLR) as a criterion for speaker clustering within a speaker diarization system. The CLR has previously been shown to be a robust decision criterion for speaker clustering using Gaussian Mixture Models. Recently, eigenvoice modeling techniques have become increasingly popular due to their ability to adequately represent a speaker based on sparse training data, as well as their improved capture of differences in speaker characteristics. This paper therefore proposes capitalizing on the advantages of eigenvoice modeling within a CLR framework. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, resulting in a 35.1% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
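
For context, the form of the CLR commonly used in diarization compares two clusters a and b, with data X_a, X_b (N_a, N_b frames), adapted models \lambda_a, \lambda_b and a universal background model \lambda_{UBM}:

    \mathrm{CLR}(a,b) \;=\; \frac{1}{N_a}\log\frac{p(X_a \mid \lambda_b)}{p(X_a \mid \lambda_{UBM})} \;+\; \frac{1}{N_b}\log\frac{p(X_b \mid \lambda_a)}{p(X_b \mid \lambda_{UBM})},

with the closest pair merged while the CLR exceeds a threshold; in the proposed system the GMM likelihoods are replaced by likelihoods under eigenvoice-adapted speaker models.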

Relevance: 100.00%

Abstract:

This paper proposes the use of Bayesian approaches with the cross likelihood ratio (CLR) as a criterion for speaker clustering within a speaker diarization system, using eigenvoice modeling techniques. The CLR has previously been shown to be an effective decision criterion for speaker clustering using Gaussian mixture models. Recently, eigenvoice modeling has become an increasingly popular technique due to its ability to adequately represent a speaker based on sparse training data, as well as its improved capture of differences in speaker characteristics. The integration of eigenvoice modeling into the CLR framework, to capitalize on the advantages of both techniques, has also been shown to be beneficial for the speaker clustering task. Building on that success, this paper proposes the use of Bayesian methods to compute the conditional probabilities used in the CLR, thus effectively combining the eigenvoice-CLR framework with the advantages of a Bayesian approach to the diarization problem. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, resulting in a 33.5% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
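
One natural reading of the Bayesian step is that each cross-likelihood term becomes a predictive (marginal) likelihood, integrating over the posterior of the eigenvoice (speaker-factor) parameters rather than plugging in a point estimate, e.g.

    p(X_a \mid X_b) \;=\; \int p(X_a \mid \theta)\, p(\theta \mid X_b)\, d\theta;

the exact form of the posterior and of the integration follows the paper rather than this schematic.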

Relevance: 100.00%

Abstract:

In this paper, elastic wave propagation is studied in a nanocomposite reinforced with multiwall carbon nanotubes (CNTs). The analysis is performed on a representative volume element of square cross section, and the frequency content of the exciting signal is at the terahertz level. The composite is modeled as a higher-order shear deformable beam using layerwise theory, to account for partial shear stress transfer between the CNTs and the matrix. The walls of the multiwall CNTs are considered to be connected throughout their length by distributed springs, whose stiffness is governed by the van der Waals forces acting between the walls of the nanotubes. The analyses in both the frequency and time domains are performed using the wavelet-based spectral finite element method (WSFEM). The method uses a Daubechies wavelet basis approximation in time to reduce the governing PDEs to a set of ODEs. These transformed ODEs are solved using a finite element (FE) technique by deriving an exact interpolating function in the transformed domain to obtain the exact dynamic stiffness matrix. Numerical analyses are performed to study the spectrum and dispersion relations for different matrix materials and for different beam models. The effects of partial shear stress transfer between the CNTs and the matrix on the frequency response function (FRF) and on the time response due to broadband impulse loading are investigated for different matrix materials. The simultaneous existence of four coupled propagating modes in a double-walled CNT composite is also captured using modulated sinusoidal excitation.
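
The core reduction can be illustrated schematically on an elementary rod (the beam models used in the paper are higher order, but the pattern is the same): writing u(x,t) \approx \sum_k u_k(x)\,\varphi(t/\Delta t - k) with \varphi a Daubechies scaling function, a wavelet-Galerkin projection in time converts

    EA\,\frac{\partial^2 u}{\partial x^2} = \rho A\,\frac{\partial^2 u}{\partial t^2}
    \quad\text{into}\quad
    EA\,u_j''(x) = \frac{\rho A}{\Delta t^2}\sum_k \Omega^{2}_{jk}\,u_k(x),

where \Omega^{2} collects the second-order connection coefficients of the scaling functions; an eigendecomposition of [\Omega^{2}] decouples the ODEs, which are then solved exactly in x to assemble the dynamic stiffness matrix.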

Relevance: 100.00%

Abstract:

In this paper, a model for a composite beam with an embedded delamination is developed using the wavelet-based spectral finite element (WSFE) method, particularly for damage detection using wave propagation analysis. The simulated responses are used as surrogate experimental results for the inverse problem of damage detection using wavelet filtering. The WSFE technique is very similar to the fast Fourier transform (FFT) based spectral finite element (FSFE) method, except that it uses a compactly supported Daubechies scaling function approximation in time. Unlike the FSFE formulation with its periodicity assumption, the wavelet-based method allows the imposition of initial values and is thus free from wrap-around problems. This helps in the analysis of finite-length undamped structures, where the FSFE method fails to simulate an accurate response. First, numerical experiments are performed to study the effect of delamination on the wave propagation characteristics; the responses are simulated for different delamination configurations for both broad-band and narrow-band excitations. Next, the simulated responses are used for damage detection using wavelet analysis.

Relevance: 100.00%

Abstract:

Regionalization approaches are widely used in water resources engineering to identify hydrologically homogeneous groups of watersheds, which are referred to as regions. Pooled information from sites (depicting watersheds) in a region forms the basis for estimating quantiles associated with extreme hydrological events at ungauged or sparsely gauged sites in the region. Conventional regionalization approaches can be effective when watersheds (data points) corresponding to different regions can be separated by straight lines or linear planes in the space of watershed-related attributes. In this paper, a kernel-based Fuzzy c-means (KFCM) clustering approach is presented for use in situations where such linear separation of regions cannot be accomplished. The approach uses kernel functions to map the data points from the attribute space to a higher-dimensional space where they can be separated into regions by linear planes. A procedure to determine the optimal number of regions with the KFCM approach is suggested, and formulations to estimate flood quantiles at ungauged sites are developed. The effectiveness of the approach is demonstrated through Monte Carlo simulation experiments and a case study on watersheds in the United States. Comparison of the results with those based on conventional Fuzzy c-means clustering, the region-of-influence approach and a prior study indicates that the KFCM approach outperforms the other approaches in forming regions that are closer to being statistically homogeneous and in estimating flood quantiles at ungauged sites. Key points: a kernel-based regionalization approach is presented for flood frequency analysis; a kernel procedure to estimate flood quantiles at ungauged sites is developed; and a set of fuzzy regions is delineated in Ohio, USA.
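
For illustration, a minimal kernelised fuzzy c-means loop operating entirely through an RBF kernel matrix is sketched below in Python; the kernel choice, parameter names and update form are generic stand-ins, and the attribute scaling, cluster validation and flood-quantile estimation steps of the paper are omitted:

    # Minimal kernelised fuzzy c-means loop in the spirit of KFCM (illustrative only).
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def kernel_fcm(X, n_regions, m=2.0, gamma=0.5, n_iter=100, seed=0):
        """X: (n_watersheds, n_attributes) matrix of watershed attributes."""
        K = rbf_kernel(X, gamma=gamma)                             # kernel matrix over watershed attributes
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(n_regions), size=X.shape[0]).T   # fuzzy memberships, shape (c, n)
        for _ in range(n_iter):
            W = U ** m
            W = W / W.sum(axis=1, keepdims=True)                   # normalised weights of each implicit centroid
            # Squared distance from every point to every centroid in the kernel-induced feature space.
            d2 = np.diag(K)[None, :] - 2.0 * W @ K + np.einsum('cj,jl,cl->c', W, K, W)[:, None]
            d2 = np.maximum(d2, 1e-12)
            inv = d2 ** (-1.0 / (m - 1.0))
            U = inv / inv.sum(axis=0, keepdims=True)               # standard fuzzy c-means membership update
        return U                                                   # membership of each watershed in each fuzzy region

Because the centroids live only implicitly in the kernel-induced feature space, regions that are not linearly separable in the attribute space can still be recovered, which is the motivation stated in the abstract.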