43 results for Clustering analysis

in Deakin Research Online - Australia


Relevance:

60.00%

Publisher:

Abstract:

Development of polarized immune responses controls resistance and susceptibility to many microorganisms. However, studies of several infectious, allergic, and autoimmune diseases have shown that chronic type-1 and type-2 cytokine responses can also cause significant morbidity and mortality if left unchecked. We used mouse cDNA microarrays to molecularly phenotype the gene expression patterns that characterize two disparate but equally lethal forms of liver pathology that develop in Schistosoma mansoni-infected mice polarized for type-1 and type-2 cytokine responses. Hierarchical clustering analysis identified at least three groups of genes associated with a polarized type-2 response and two linked with an extreme type-1 cytokine phenotype. Predictions about liver fibrosis, apoptosis, and granulocyte recruitment and activation generated by the microarray studies were later confirmed by traditional biological assays. The data show that cDNA microarrays are useful not only for determining coordinated gene expression profiles but are also highly effective for molecularly “fingerprinting” diseased tissues. Moreover, they illustrate the potential of genome-wide approaches for generating comprehensive views of the molecular and biochemical mechanisms regulating infectious disease pathogenesis.
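As a minimal sketch of the hierarchical clustering step described above (using a tiny hypothetical expression matrix, not the authors' data), co-regulated genes can be grouped by linkage clustering of their expression profiles:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical expression matrix: rows are genes, columns are conditions.
expr = np.array([
    [5.0, 5.1, 0.2, 0.1],   # gene A, high in the first two conditions
    [4.8, 5.2, 0.3, 0.2],   # gene B, profile similar to A
    [0.1, 0.2, 4.9, 5.0],   # gene C, high in the last two conditions
    [0.2, 0.1, 5.1, 4.8],   # gene D, profile similar to C
])

# Average-linkage clustering on Euclidean distances between gene profiles.
Z = linkage(expr, method="average", metric="euclidean")

# Cut the dendrogram into two groups of co-expressed genes.
labels = fcluster(Z, t=2, criterion="maxclust")
```

Genes with similar profiles (A/B and C/D) land in the same group, which is the same principle the study uses to pull out type-1- and type-2-associated gene clusters.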

Relevance:

40.00%

Publisher:

Abstract:

For many clustering algorithms, such as K-Means, EM, and CLOPE, there is usually a requirement to set some parameters. Often, these parameters directly or indirectly control the number of clusters, k, to return. In the presence of different data characteristics and analysis contexts, it is often difficult for the user to estimate the number of clusters in the data set. This is especially true for text collections such as Web documents, images, or biological data. In an effort to improve the effectiveness of clustering, we seek the answer to a fundamental question: how can we effectively estimate the number of clusters in a given data set? We propose an efficient method based on spectral analysis of the eigenvalues (not eigenvectors) of the data set as the solution. We first present the relationship between a data set and its underlying spectrum with theoretical and experimental results. We then show how our method is capable of suggesting a range of k well suited to different analysis contexts. Finally, we conclude with further empirical results showing how the answer to this fundamental question enhances the clustering process for large text collections.
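One common form of the eigenvalue idea, sketched here on a hypothetical block-structured similarity matrix (this is the generic eigengap heuristic, not necessarily the paper's exact procedure), is to look for the largest gap in the sorted eigenvalue spectrum:

```python
import numpy as np

# Hypothetical pairwise similarity matrix with three clear blocks (clusters).
S = np.zeros((10, 10))
S[0:3, 0:3] = 1.0
S[3:7, 3:7] = 1.0
S[7:10, 7:10] = 1.0

eigvals = np.linalg.eigvalsh(S)[::-1]   # eigenvalues in descending order
gaps = eigvals[:-1] - eigvals[1:]       # successive eigenvalue gaps
k_estimate = int(np.argmax(gaps)) + 1   # largest gap suggests the number of clusters
```

For this matrix the dominant eigenvalues are 4, 3, and 3, followed by zeros, so the largest gap sits after the third eigenvalue and the estimate is k = 3.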

Relevance:

40.00%

Publisher:

Abstract:

Recently, much attention has been given to mass spectrometry (MS)-based disease classification, diagnosis, and protein-based biomarker identification. As with microarray-based investigations, the proteomic data generated by such high-throughput experiments often have a high feature-to-sample ratio. Moreover, biological information and patterns are confounded with noise, redundancy, and outliers. Thus, the development of algorithms and procedures for the analysis and interpretation of such data is of paramount importance. In this paper, we propose a hybrid system for analyzing such high-dimensional data. The proposed method uses a k-means clustering-based feature extraction and selection procedure to bridge filter and wrapper selection methods. The potentially informative mass/charge (m/z) markers selected by filters are subjected to k-means clustering for correlation and redundancy reduction, and a multi-objective genetic algorithm selector is then employed to identify discriminative m/z markers from the resulting clusters. Experimental results indicate that the proposed method is suitable for m/z biomarker selection and MS-based sample classification.
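The redundancy-reduction step can be sketched as follows, on synthetic data rather than real MS spectra: correlated features (columns) are clustered with k-means, and one representative feature per cluster is kept. The genetic algorithm stage is omitted; this only illustrates the clustering half of the hybrid.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical intensity matrix: rows are samples, columns are m/z features.
# Features 0/1 are near-duplicates, as are features 2/3.
base = rng.normal(size=(20, 2))
X = np.column_stack([
    base[:, 0], base[:, 0] + 0.01 * rng.normal(size=20),
    base[:, 1], base[:, 1] + 0.01 * rng.normal(size=20),
])

# Cluster the *features* (transpose), so correlated m/z markers group together.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X.T)

# Keep one representative feature per cluster: the one closest to its centroid.
reps = []
for c in range(2):
    idx = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(X.T[idx] - km.cluster_centers_[c], axis=1)
    reps.append(int(idx[np.argmin(d)]))
```

The two redundant pairs collapse into two representative markers, which is the kind of reduced candidate set a downstream wrapper selector would then search.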

Relevance:

40.00%

Publisher:

Abstract:

This paper presents an empirical analysis of the effects of different colour models on image segmentation with the fuzzy c-means (FCM) clustering algorithm. A qualitative evaluation method based on human perceptual judgement is used. Two sets of complex images, i.e., outdoor scenes and satellite imagery, are used for demonstration. These images are employed to examine the characteristics of image segmentation using FCM with eight different colour models. The results obtained from the experimental study are compared and analysed. It is found that the CIELAB colour model yields the best outcomes in colour image segmentation with FCM.
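A minimal FCM implementation, applied to hypothetical Lab-like pixel values, shows the mechanics of the segmentation step (in real use the pixels would first be converted to the chosen colour model, e.g. RGB to CIELAB via skimage.color.rgb2lab; the deterministic initialisation here is purely for illustration):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100):
    """Minimal fuzzy c-means sketch: returns memberships U (n x c) and centers."""
    # Simple deterministic init for illustration: centers spread across the data.
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    for _ in range(iters):
        # Distances from every point to every center (epsilon avoids divide-by-zero).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)        # membership update
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted center update
    return U, centers

# Hypothetical pixels in a Lab-like space: two well-separated colour regions.
rng = np.random.default_rng(1)
pix = np.vstack([rng.normal((20, 10, 10), 1.0, size=(6, 3)),
                 rng.normal((70, -10, -10), 1.0, size=(6, 3))])

U, centers = fuzzy_c_means(pix, c=2)
seg = U.argmax(axis=1)   # hard segmentation from the fuzzy memberships
```

Each pixel ends up with a membership grade per cluster; taking the argmax yields the segmentation, and swapping the colour space of the input is exactly the variable the paper studies.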

Relevance:

40.00%

Publisher:

Abstract:

Biomedical time series clustering that automatically groups a collection of time series according to their internal similarity is of importance for medical record management and inspection, such as bio-signal archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the hierarchical probabilistic latent semantic analysis (H-pLSA), which was originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations in parameters, including the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, with many applications in biomedical time series management.
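The "segments as words" idea can be sketched loosely (this is not the authors' H-pLSA pipeline, just the document/word analogy): local windows are extracted from one hypothetical channel, quantised into a codebook, and counted into a bag-of-words histogram that a topic model could then consume.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_words(signal, width, step):
    """Slide a window along one channel; each mean-centred segment is a 'word'."""
    segs = np.array([signal[i:i + width]
                     for i in range(0, len(signal) - width + 1, step)])
    return segs - segs.mean(axis=1, keepdims=True)

# Hypothetical single-channel signal: alternating flat and oscillating stretches.
t = np.arange(200)
signal = np.where((t // 50) % 2 == 0, 0.0, np.sin(t))

words = extract_words(signal, width=10, step=5)

# Quantise segments into a small codebook; the histogram is the channel's
# bag-of-words representation.
codebook = KMeans(n_clusters=3, n_init=10, random_state=0).fit(words)
hist = np.bincount(codebook.labels_, minlength=3)
```

In the paper's framework, per-channel representations like this feed local pLSA models, whose topics are in turn aggregated by a global pLSA to cluster whole recordings.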

Relevance:

40.00%

Publisher:

Abstract:

Failure mode and effect analysis (FMEA) is a popular safety and reliability analysis tool for examining potential failures of products, processes, designs, or services in a wide range of industries. While FMEA is a popular tool, the limitations of the traditional Risk Priority Number (RPN) model in FMEA have been highlighted in the literature. Even though many alternatives to the traditional RPN model have been proposed, there have been few investigations into the use of clustering techniques in FMEA. The main aim of this paper is to examine the use of a new Euclidean distance-based similarity measure and an incremental-learning clustering model, the fuzzy adaptive resonance theory neural network, for similarity analysis and clustering of failure modes in FMEA, thereby allowing the failure modes to be analyzed, visualized, and clustered. The concept of a risk interval encompassing a group of failure modes is investigated, and a new approach to analyzing the risk ordering of different failure groups is introduced. These proposed methods are evaluated using a case study related to the edible bird nest industry in Sarawak, Malaysia. In short, the contributions of this paper are threefold: (1) a new Euclidean distance-based similarity measure, (2) a new risk interval measure for a group of failure modes, and (3) a new analysis of the risk ordering of different failure groups. © 2014 The Natural Computing Applications Forum.
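The paper's exact measure is not reproduced here, but a common way to turn Euclidean distance into a similarity, applied to hypothetical severity/occurrence/detection score vectors, looks like this:

```python
import numpy as np

# Hypothetical failure modes described by (Severity, Occurrence, Detection) scores.
modes = np.array([
    [9, 2, 3],   # FM1
    [8, 3, 3],   # FM2, a risk profile close to FM1
    [2, 9, 8],   # FM3, a very different risk profile
])

# Pairwise Euclidean distances between score vectors, mapped into (0, 1]:
# identical modes get similarity 1, and similarity decays with distance.
dist = np.linalg.norm(modes[:, None, :] - modes[None, :, :], axis=2)
sim = 1.0 / (1.0 + dist)
```

A similarity matrix of this kind is what a clustering model (such as the fuzzy ART network used in the paper) can consume to group failure modes with comparable risk profiles.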

Relevance:

40.00%

Publisher:

Abstract:

Spike sorting plays an important role in analysing electrophysiological data and understanding neural functions. Developing spike sorting methods that are both highly accurate and computationally inexpensive is a persistent challenge in biomedical engineering practice. This paper proposes an automatic unsupervised spike sorting method using landmark-based spectral clustering (LSC) in connection with features extracted by the locality preserving projection (LPP) technique. The gap statistic is employed to estimate the number of clusters before LSC is performed. Experimental results show that LPP spike features are more discriminative than those of the popular wavelet transform (WT). Accordingly, the proposed LPP-LSC method significantly outperforms the existing approach that combines WT feature extraction with superparamagnetic clustering. LPP and LSC are both linear algorithms that help reduce the computational burden, so their combination can be applied to real-time spike analysis.
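A simplified gap statistic, run on a hypothetical 2-D spike-feature cloud, illustrates the cluster-count estimation step (this compares the data's within-cluster dispersion against uniform reference data over the bounding box; the full formulation also standardises by the reference spread):

```python
import numpy as np
from sklearn.cluster import KMeans

def log_dispersion(X, k):
    """Log of the pooled within-cluster sum of squares for a k-means fit."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return np.log(km.inertia_ + 1e-12)

def gap_statistic(X, k, n_refs=5, seed=0):
    """Simplified gap: expected reference dispersion minus data dispersion."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    ref = [log_dispersion(rng.uniform(lo, hi, size=X.shape), k)
           for _ in range(n_refs)]
    return float(np.mean(ref) - log_dispersion(X, k))

# Hypothetical 2-D spike features with three clear groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2))
               for c in [(0, 0), (5, 0), (0, 5)]])

gaps = {k: gap_statistic(X, k) for k in (1, 2, 3, 4)}
best_k = max(gaps, key=gaps.get)
```

The gap rises sharply once k matches the true number of groups, giving the clustering stage its k before LSC is applied.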

Relevance:

40.00%

Publisher:

Abstract:

This paper introduces a novel approach for discrete event simulation output analysis. The approach combines dynamic time warping and clustering to enable the identification of system behaviours contributing to overall system performance, by linking the resulting clusters to specific causal events within the system. Simulation model event logs are analysed to group entity flows based on the path taken and the travel time through the system. The proposed approach is investigated for a discrete event simulation of an international airport baggage handling system. Results show that the method is able to automatically identify key factors that influence the overall dwell time of system entities, such as bags that fail primary screening. The novel analysis methodology provides insight into system performance beyond that achievable through traditional analysis techniques. The technique also has potential application to agent-based modelling paradigms and to business event logs traditionally studied using process mining techniques.
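The dynamic time warping (DTW) distance at the core of this approach can be computed with the classic dynamic program. The traces below are hypothetical dwell-time profiles, not the paper's airport data:

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three neighbouring alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical dwell-time traces for three bags through a handling system.
bag1 = [0, 1, 2, 3, 2, 1]        # normal path
bag2 = [0, 1, 1, 2, 3, 2, 1]     # same shape, slightly stretched in time
bag3 = [0, 5, 9, 9, 9, 5]        # very different profile (e.g. rescreened bag)

d12, d13 = dtw(bag1, bag2), dtw(bag1, bag3)
```

Because DTW aligns the time axes, the stretched-but-identical trace is distance 0 from the normal one while the anomalous trace stays far away; clustering on this distance matrix is what groups entity flows by behaviour.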

Relevance:

40.00%

Publisher:

Abstract:

Cluster analysis has been identified as a core task in data mining. What constitutes a cluster, or a good clustering, may depend on the background of researchers and applications. This paper proposes two optimization criteria, abstraction degree and fidelity, in the field of image abstraction. To satisfy the fidelity criterion, a novel clustering algorithm named Global Optimized Color-based DBSCAN Clustering (GOC-DBSCAN) is provided. A non-optimized version based on local color information, called HSV-DBSCAN, is also given. Both are based on the HSV color space. Clusters found by GOC-DBSCAN are analyzed to identify the factors that affect both abstraction degree and fidelity. Examples show that, in general, the greater the abstraction degree, the lower the fidelity. They also show that GOC-DBSCAN outperforms HSV-DBSCAN when evaluated against the two optimization criteria.
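The basic ingredient, DBSCAN over pixels represented in HSV space, can be sketched on hypothetical two-region pixel data (this is plain DBSCAN on HSV values, not the paper's globally optimized variant):

```python
import colorsys
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical pixels: an orange region and a blue region (RGB in [0, 1]).
rng = np.random.default_rng(0)
rgb = np.array([[1.0, 0.3, 0.05]] * 10 + [[0.05, 0.05, 1.0]] * 10)
rgb = np.clip(rgb + rng.normal(0.0, 0.005, rgb.shape), 0.0, 1.0)

# Convert each pixel to HSV before clustering, as in colour-based DBSCAN.
hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb])

# Density-based clustering: nearby colours merge, sparse outliers become noise.
labels = DBSCAN(eps=0.1, min_samples=3).fit_predict(hsv)
```

The two colour regions come out as two dense clusters; an optimized variant like GOC-DBSCAN then tunes how aggressively such regions merge, trading abstraction degree against fidelity.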


Relevance:

30.00%

Publisher:

Abstract:

Clustering is a difficult problem, especially when we consider the task in the context of a data stream of categorical attributes. In this paper, we propose SCLOPE, a novel algorithm based on CLOPE's intuitive observation about cluster histograms. Unlike CLOPE, however, our algorithm is very fast and operates within the constraints of a data stream environment. In particular, we designed SCLOPE according to the recent CluStream framework. Our evaluation of SCLOPE shows very promising results. It consistently outperforms CLOPE in speed and scalability tests on our data sets while maintaining high cluster purity; it also supports cluster analysis that other algorithms in its class do not.

Relevance:

30.00%

Publisher:

Abstract:

The rapid increase in web complexity and size leaves web search results far from satisfactory in many cases, owing to the huge amount of information returned by search engines. How to find intrinsic relationships among web pages at a higher level, so as to implement efficient management and retrieval of searched web information, is becoming a challenging problem. In this paper, we propose an approach to measuring web page similarity that takes hyperlink transitivity and page importance into consideration. From this new similarity measurement, an effective hierarchical web page clustering algorithm is derived. Preliminary evaluations show the effectiveness of the new similarity measurement and the resulting improvement in web page clustering. The proposed page similarity, as well as the matrix-based hyperlink analysis methods, could be applied to other web-based research areas.
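As a simplified flavour of hyperlink-based similarity (the paper additionally folds in link transitivity and page importance), pages can be compared by the cosine similarity of their out-link vectors over a hypothetical adjacency matrix:

```python
import numpy as np

# Hypothetical hyperlink adjacency: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0, 0],   # page 0
    [0, 0, 1, 1, 0],   # page 1, shares a link target with page 0
    [0, 0, 0, 0, 1],   # page 2, links elsewhere
    [0, 0, 0, 0, 1],   # page 3
    [0, 1, 0, 0, 0],   # page 4
])

def cosine(u, v):
    """Cosine similarity between two link vectors (0 when either is empty)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

s01 = cosine(A[0], A[1])   # pages with overlapping out-links
s02 = cosine(A[0], A[2])   # pages with disjoint out-links
```

Pages 0 and 1 share a target and score 0.5, while pages 0 and 2 score 0; a hierarchical clustering built on such a similarity matrix groups topically related pages.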

Relevance:

30.00%

Publisher:

Abstract:

Yanchun Zhang and his co-authors explain how to construct and analyse Web communities based on information such as Web document contents, hyperlinks, and user access logs. Their approaches combine results from Web search algorithms, Web clustering methods, and Web usage mining. They also detail the preliminaries needed to understand the algorithms presented, and they discuss several successful existing applications. Researchers and students in information retrieval and Web search will find in this book all the necessary basics and methods to create and understand Web communities. Professionals developing Web applications will additionally benefit from the examples presented for their own designs and implementations.

Relevance:

30.00%

Publisher:

Abstract:

For many clustering algorithms, such as k-means, EM, and CLOPE, there is usually a requirement to set some parameters. Often, these parameters directly or indirectly control the number of clusters to return. In the presence of different data characteristics and analysis contexts, it is often difficult for the user to estimate the number of clusters in the data set. This is especially true for text collections such as Web documents, images, or biological data. The fundamental question this paper addresses is: “How can we effectively estimate the natural number of clusters in a given text collection?” We propose to use spectral analysis, which analyzes the eigenvalues (not eigenvectors) of the collection, as the solution. We first present the relationship between a text collection and its underlying spectrum. We then show how the answer to this question enhances the clustering process. Finally, we conclude with empirical results and related work.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Fall risk screening tools are frequently used as a part of falls prevention programs in hospitals. Design-related bias in evaluations of tool predictive accuracy could lead to overoptimistic results, which would then contribute to program failure in practice.

Methods: A systematic review was undertaken. Two blinded reviewers classified the methodology of relevant publications using a four-point classification system adapted from multiple sources. The association between study design classification and reported results was examined using linear regression, with clustering based on screening tool and robust variance estimates, and with point estimates of the Youden Index (= sensitivity + specificity - 1) as the dependent variable. Meta-analysis was then performed, pooling data from prospective studies.

Results: Thirty-five publications met inclusion criteria, containing 51 evaluations of fall risk screening tools. Twenty evaluations were classified as retrospective validation evaluations, 11 as prospective (temporal) validation evaluations, and 20 as prospective (external) validation evaluations. Retrospective evaluations had significantly higher Youden Indices (point estimate [95% confidence interval]: 0.22 [0.11, 0.33]). Pooled Youden Indices from prospective evaluations demonstrated the STRATIFY, Morse Falls Scale, and nursing staff clinical judgment to have comparable accuracy.

Discussion: Practitioners should exercise caution when comparing the validity of fall risk assessment tools where the evaluations have been limited to retrospective designs. Heterogeneity between studies indicates that the Morse Falls Scale and STRATIFY may still be useful in particular settings, but widespread adoption of either is unlikely to generate benefits significantly greater than those of nursing staff clinical judgment.