28 results for data visualization

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

In this paper, a hybrid neural classifier combining the auto-encoder neural network and the Lattice Vector Quantization (LVQ) model is described. The auto-encoder network is used for dimensionality reduction by projecting high-dimensional data into the 2D space. The LVQ model is used for data visualization by forming and adapting the granularity of a data map. The mapped data are employed to predict the target classes of new data samples. To improve classification accuracy, a majority voting scheme is adopted by the hybrid classifier. To demonstrate the applicability of the hybrid classifier, a series of experiments using simulated and real fault data from induction motors is conducted. The results show that the hybrid classifier outperforms the Multi-Layer Perceptron neural network and produces high classification accuracy for various fault conditions of induction motors.
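The prototype-based stage of such a pipeline can be sketched in miniature. The snippet below runs a basic LVQ1 update on toy 2D points standing in for auto-encoder projections; the auto-encoder itself, the majority voting scheme, and the motor fault data are omitted, and all names, data, and learning parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D data standing in for auto-encoder projections of fault data.
X0 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# LVQ1: one prototype per class, nudged toward same-class samples
# and away from other-class samples.
protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
labels = np.array([0, 1])
lr = 0.05
for _ in range(20):
    for xi, yi in zip(X, y):
        w = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winning prototype
        sign = 1.0 if labels[w] == yi else -1.0             # attract or repel
        protos[w] += sign * lr * (xi - protos[w])

def predict(points):
    """Label each point by its nearest prototype (the data-map lookup)."""
    d = np.linalg.norm(points[:, None, :] - protos[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

acc = (predict(X) == y).mean()
```

In the full hybrid, several such maps (or neighboring map units) would vote on each new sample rather than a single nearest prototype deciding alone.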

Relevance: 100.00%

Abstract:

One of the issues associated with pattern classification using data-based machine learning systems is the "curse of dimensionality". In this paper, the circle-segments method is proposed as a feature selection method to identify important input features before the entire data set is provided for learning with machine learning systems. Specifically, four machine learning systems are deployed for classification, viz. Multilayer Perceptron (MLP), Support Vector Machine (SVM), Fuzzy ARTMAP (FAM), and k-Nearest Neighbour (kNN). The integration between the circle-segments method and the machine learning systems has been applied to two case studies: one benchmark data set and one real data set. Overall, the results after feature selection using the circle-segments method demonstrate improvements in performance even with more than 50% of the input features eliminated from the original data sets.
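The select-then-classify workflow can be sketched as follows. Since the circle-segments method is a visual, interactive ranking technique, the sketch substitutes a simple between-class mean gap as the feature score; the synthetic data, the score, and the leave-one-out kNN stage are all illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10
# Synthetic data: only the first 3 of 10 features carry class signal.
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 2.0

# Stand-in feature score (the paper ranks features visually with
# circle segments; here a between-class mean gap ranks them instead).
score = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
keep = np.argsort(score)[-3:]          # retain top 3 features (70% eliminated)

def knn_accuracy(Xf, y, k=5):
    """Leave-one-out kNN accuracy on the given feature subset."""
    D = np.linalg.norm(Xf[:, None, :] - Xf[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)        # a point may not vote for itself
    votes = y[np.argsort(D, axis=1)[:, :k]]
    pred = (votes.mean(axis=1) > 0.5).astype(int)
    return (pred == y).mean()

acc_all = knn_accuracy(X, y)           # all 10 features
acc_sel = knn_accuracy(X[:, keep], y)  # selected features only
```

The comparison of `acc_all` against `acc_sel` mirrors the paper's before/after-selection evaluation, here with kNN standing in for the four classifiers.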

Relevance: 70.00%

Abstract:

Microarray data classification is one of the most important emerging clinical applications in the medical community. Machine learning algorithms are most frequently used to complete this task. We selected one of the state-of-the-art kernel-based algorithms, the support vector machine (SVM), to classify microarray data. As a large number of kernels are available, a significant research question is: which kernel is best for patient diagnosis based on microarray data classification using an SVM? We first suggest three solutions based on data visualization and quantitative measures. The proposed solutions are then tested on different types of microarray problems. Finally, we found that the rule-based approach is most useful for automatic kernel selection for SVM to classify microarray data.
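Why kernel choice matters can be illustrated with a toy comparison. The sketch below trains a kernel perceptron (a short stand-in for an SVM solver) with linear and RBF kernels on XOR-style data, where only the nonlinear kernel can fit the class boundary; the data, kernels, and parameters are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
# XOR-style toy data (stand-in for microarray features): not linearly separable.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)

def linear_k(A, B):
    return A @ B.T

def rbf_k(A, B, gamma=5.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kernel_perceptron_acc(kernel, epochs=20):
    """Train a kernel perceptron; return its training accuracy."""
    K = kernel(X, X)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            if np.sign((alpha * y) @ K[:, i]) != y[i]:
                alpha[i] += 1.0        # mistake-driven update
    pred = np.sign(K @ (alpha * y))
    return (pred == y).mean()

acc_linear = kernel_perceptron_acc(linear_k)
acc_rbf = kernel_perceptron_acc(rbf_k)
```

The gap between `acc_linear` and `acc_rbf` is the kind of evidence a data-visualization or quantitative kernel-selection rule would be built on.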

Relevance: 70.00%

Abstract:

Driven by recent advances in electronics and communication, wireless sensor networks have developed rapidly to provide reliable information with higher Quality of Service (QoS) at lower cost. This paper presents a real-time tracking system developed as a part of the ISSNIP BigNet Testbed project. Here, a GPS receiver was used to acquire position information of mobile nodes, and GSM technology was used as the data communication medium. Moreover, Google Maps-based data visualization software was developed to locate the mobile nodes via the Internet. This system can accommodate various sensors, such as temperature, pressure, and pH, and monitor the status of the nodes.
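The GPS-to-map step of such a system can be sketched as parsing a standard NMEA $GPGGA sentence into decimal degrees and building a map link. The sample fix below is hypothetical, the URL form is illustrative, and checksum validation and the GSM transport are omitted.

```python
def nmea_to_decimal(value: str, hemi: str) -> float:
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[: dot - 2])
    minutes = float(value[dot - 2:])
    dec = degrees + minutes / 60.0
    return -dec if hemi in ("S", "W") else dec

def parse_gpgga(sentence: str):
    """Extract (lat, lon) from a $GPGGA sentence (checksum handling omitted)."""
    f = sentence.split(",")
    lat = nmea_to_decimal(f[2], f[3])
    lon = nmea_to_decimal(f[4], f[5])
    return lat, lon

# Hypothetical fix near Melbourne, Australia.
sample = "$GPGGA,123519,3748.845,S,14457.763,E,1,08,0.9,545.4,M,46.9,M,,*47"
lat, lon = parse_gpgga(sample)
map_url = f"https://www.google.com/maps?q={lat:.6f},{lon:.6f}"
```

In the described system the sentence would arrive over GSM and the coordinates would be plotted on an embedded map rather than a plain URL.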

Relevance: 70.00%

Abstract:

In this paper, a hybrid intelligent system that integrates the SOM (Self-Organizing Map) neural network, kMER (kernel-based Maximum Entropy learning Rule), and Probabilistic Neural Network (PNN) for data visualization and classification is proposed. The rationales of this Probabilistic SOM-kMER model are explained, and its applicability is demonstrated using two benchmark data sets. The results are analyzed and compared with those from a number of existing methods. The implications of the proposed hybrid system as a useful and usable data visualization and classification tool are discussed.
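The PNN stage of such a hybrid can be sketched as a Parzen-window classifier: average a Gaussian kernel over each class's training points and pick the class with the larger density. The SOM and kMER stages are omitted, and the Gaussian blob data and smoothing width below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two Gaussian blobs as a stand-in benchmark data set.
X = np.vstack([rng.normal(-1.0, 0.5, size=(40, 2)),
               rng.normal(+1.0, 0.5, size=(40, 2))])
y = np.array([0] * 40 + [1] * 40)

def pnn_predict(Xtr, ytr, Xte, sigma=0.5):
    """PNN: per-class Parzen density estimate, classify by the larger one."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian window
    scores = np.stack([K[:, ytr == c].mean(axis=1) for c in (0, 1)], axis=1)
    return np.argmax(scores, axis=1)

pred = pnn_predict(X, y, X)
acc = (pred == y).mean()
```

In the full model, kMER-trained SOM prototypes would replace the raw training points as the kernel centres, giving the visualizable data map the abstract describes.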

Relevance: 60.00%

Abstract:

The emergence of Web 2.0 has brought about new Web applications being developed. Represented chiefly by Web applications such as YouTube, MySpace, blogs and Google applications, these community-based technologies are changing the way we use the Internet. One interesting result of these innovations is the extensibility of these applications. For example, YouTube's content can be displayed on other Websites and hence is popularly 'extended' to be displayed on individual blogs and other organization Websites. In this paper, we discuss two applications that result from extending Google Earth and Google Maps. These two applications illustrate how new solutions can be quickly built from these extensible applications, suggesting a future of application development that is built upon applications rather than object-oriented components.

Relevance: 60.00%

Abstract:

This paper proposes a hybrid system that integrates the SOM (Self-Organizing Map) neural network, the kMER (kernel-based Maximum Entropy learning Rule) algorithm and the Probabilistic Neural Network (PNN) for data visualization and classification. The rationales of this hybrid SOM-kMER-PNN model are explained, and the applicability of the proposed model is demonstrated using two benchmark data sets and a real-world application to fault detection and diagnosis. The outcomes show that the hybrid system achieves classification rates comparable to those of a number of existing classifiers and, at the same time, produces meaningful visualization of the data sets.

Relevance: 60.00%

Abstract:

This paper proposes a novel architecture for developing decision support systems. Unlike conventional decision support systems, the proposed architecture endeavors to reveal the decision-making process so that human subjectivity can be incorporated into a computerized system while preserving the system's capability to process information objectively. A number of techniques used in developing the decision support system are elaborated to make the decision-making process transparent. These include procedures for high-dimensional data visualization, pattern classification, prediction, and evolutionary computational search. An artificial data set is first employed to compare the proposed approach with other methods. A simulated handwritten data set and a real data set on liver disease diagnosis are then employed to evaluate the efficacy of the proposed approach. The results are analyzed and discussed. The potential of the proposed architecture as a useful decision support system is demonstrated.
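The evolutionary computational search component can be illustrated with a generic (1+1) hill-climbing sketch on a toy objective. The paper's actual search operators and objective are not given in the abstract, so everything below, including the fitness function, is a hypothetical stand-in.

```python
import random

random.seed(4)

def fitness(x: float) -> float:
    """Toy objective standing in for a decision-model quality measure.

    Maximized at x = 3; the real system would score candidate decision
    models instead of a scalar.
    """
    return -(x - 3.0) ** 2

# Minimal (1+1) evolutionary search: mutate the incumbent with Gaussian
# noise and keep the better of parent and offspring.
best = 0.0
for _ in range(500):
    cand = best + random.gauss(0, 0.5)
    if fitness(cand) > fitness(best):
        best = cand
```

Population-based variants (crossover, selection over many candidates) follow the same mutate-evaluate-keep loop, just over a set of incumbents.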

Relevance: 60.00%

Abstract:

As the study of cinema has increasingly turned to the examination of economic ebbs and industrial flows, rather than focussing its attention solely on the critical evaluation of the films themselves, new analytic techniques and tools have been adopted (and adapted) by film scholars. Key amongst these is the use of innovative visualization techniques that can assist in the understanding of the spatial and temporal features of film industry practices. However, like the cinema itself, visualization carries its own spatial and temporal dimension. This article explores some of the benefits and limitations that derive from the use of spatial visualization technologies in the field of cinema studies. In particular, this research presents a new holistic multivariate approach to spatio-temporal visualization for point-based historical data. This method has been developed by extending the spatial presence in timeline graphics and through meaningful spatial classification and representation.