973 results for "mutual information"


Relevance: 60.00%

Abstract:

We agree with Duckrow and Albano [Phys. Rev. E 67, 063901 (2003)] and Quian Quiroga [Phys. Rev. E 67, 063902 (2003)] that mutual information (MI) is a useful measure of dependence for electroencephalogram (EEG) data, but we show that the improvement seen in the performance of MI on extracting dependence trends from EEG depends more on the type of MI estimator than on any embedding technique used. In an independent study conducted in search of an optimal MI estimator, in particular for EEG applications, we examined the performance of a number of MI estimators on the data set used by Quian Quiroga in their original study, where the performance of different dependence measures on real data was investigated [Phys. Rev. E 65, 041903 (2002)]. We show that for EEG applications the best performance among the investigated estimators is achieved by k-nearest neighbors, which supports the conjecture by Quian Quiroga [Phys. Rev. E 67, 063902 (2003)] that the nearest-neighbor estimator is the most precise method for estimating MI.
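
As a loose illustration of the k-nearest-neighbour approach favoured in this abstract, here is a minimal pure-Python sketch of the Kraskov–Stögbauer–Grassberger (KSG) estimator. This is not the authors' implementation: the brute-force O(N²) neighbour search and the hand-rolled digamma approximation are simplifications for readability.

```python
import math
import random

def digamma(x):
    # Crude digamma: recurrence to push x above 6, then an asymptotic series.
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    return r + math.log(x) - 1.0 / (2 * x) - 1.0 / (12 * x * x)

def ksg_mi(xs, ys, k=3):
    """KSG estimator (algorithm 1) for scalar series; brute-force neighbour search."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        # Distance to the k-th nearest neighbour in the joint (x, y) space, max-norm.
        eps = sorted(max(abs(xs[i] - xs[j]), abs(ys[i] - ys[j]))
                     for j in range(n) if j != i)[k - 1]
        nx = sum(1 for j in range(n) if j != i and abs(xs[i] - xs[j]) < eps)
        ny = sum(1 for j in range(n) if j != i and abs(ys[i] - ys[j]) < eps)
        total += digamma(nx + 1) + digamma(ny + 1)
    return digamma(k) + digamma(n) - total / n

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(300)]
mi_dep = ksg_mi(x, [v + 0.2 * rng.gauss(0, 1) for v in x])   # strong dependence
mi_ind = ksg_mi(x, [rng.gauss(0, 1) for _ in range(300)])    # independence
print(mi_dep, mi_ind)  # mi_dep well above zero, mi_ind close to zero
```

For the dependent pair the true MI is about 1.6 nats, and the estimate should land near it even at N = 300, which is the data-efficiency property that makes this estimator attractive for short EEG segments.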

Relevance: 60.00%

Abstract:

This paper outlines a method for automatic artefact removal from multichannel recordings of event-related potentials (ERPs). The proposed method first separates the ERP recordings into independent components using temporal decorrelation source separation (TDSEP). Second, the novel lagged auto-mutual information clustering (LAMIC) algorithm clusters the estimated components, together with ocular reference signals, into clusters corresponding to cerebral and non-cerebral activity. Third, the components in the cluster containing the ocular reference signals are discarded. The remaining components are then recombined to reconstruct the clean ERPs.

Relevance: 60.00%

Abstract:

Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as model resolution and non-linearity increase and more non-linear observation operators come into use. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz '63 system. The article concludes with a discussion of the appropriateness of these measures of observation impact for different situations.
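
The central finding, that the sensitivity of the posterior mean depends on the observed value under a Gaussian-mixture prior but not under a Gaussian one, can be illustrated with a scalar sketch. The numbers are hypothetical and a finite-difference derivative stands in for the paper's analytical derivation:

```python
import math

def norm_pdf(v, mu, var):
    return math.exp(-(v - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def posterior_mean(y, var_o, prior):
    """Posterior mean for a scalar Gaussian-mixture prior [(weight, mean, var), ...]
    and a Gaussian likelihood with observation-error variance var_o."""
    parts = []
    for w, mu, var in prior:
        weight = w * norm_pdf(y, mu, var + var_o)   # updated component weight
        var_a = 1.0 / (1.0 / var + 1.0 / var_o)
        mu_a = var_a * (mu / var + y / var_o)       # component analysis mean
        parts.append((weight, mu_a))
    z = sum(wt for wt, _ in parts)
    return sum(wt * m for wt, m in parts) / z

def sensitivity(y, var_o, prior, eps=1e-5):
    """Finite-difference d(posterior mean)/d(observation)."""
    return (posterior_mean(y + eps, var_o, prior)
            - posterior_mean(y - eps, var_o, prior)) / (2.0 * eps)

gauss = [(1.0, 0.0, 1.0)]                     # single Gaussian prior
mix = [(0.5, -2.0, 0.5), (0.5, 2.0, 0.5)]     # bimodal Gaussian-mixture prior

s_gauss = [sensitivity(y, 0.5, gauss) for y in (-2.0, 0.0, 2.0)]
s_mix = [sensitivity(y, 0.5, mix) for y in (-2.0, 0.0, 2.0)]
print(s_gauss, s_mix)  # constant for the Gaussian prior, y-dependent for the mixture
```

With the Gaussian prior every entry of `s_gauss` equals the Kalman gain; with the bimodal prior the sensitivity spikes near y = 0, where the posterior mean swings between the two modes.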

Relevance: 60.00%

Abstract:

In this paper, single-carrier multiple-input multiple-output (MIMO) transmit beamforming (TB) systems in the presence of high-power amplifier (HPA) nonlinearity are investigated. Specifically, due to the suboptimality of the conventional maximal ratio transmission/maximal ratio combining (MRT/MRC) under HPA nonlinearity, we propose the optimal TB scheme with the optimal beamforming weight vector and combining vector, for MIMO systems with nonlinear HPAs. Moreover, an alternative suboptimal but much simpler TB scheme, namely, quantized equal gain transmission (QEGT), is proposed. The latter profits from the property that the elements of the beamforming weight vector have the same constant modulus. The performance of the proposed optimal TB scheme and QEGT/MRC technique in the presence of the HPA nonlinearity is evaluated in terms of the average symbol error probability and mutual information with the Gaussian input, considering the transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects on the performance of several system parameters, namely, the HPA parameters, the number of antennas, the quadrature amplitude modulation order, the number of pilot symbols, and the cardinality of the beamforming weight vector codebook for QEGT.
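
The conventional MRT baseline this paper improves upon, and the constant-modulus (equal gain) weights underlying QEGT, can be sketched for a single-user multi-antenna link. This is illustrative only; the HPA nonlinearity, the receive combining, and the quantized codebook are omitted:

```python
import math
import random

def mrt_weights(h):
    """Maximal ratio transmission: conjugate channel, normalised to unit power."""
    norm = math.sqrt(sum(abs(hi) ** 2 for hi in h))
    return [hi.conjugate() / norm for hi in h]

def egt_weights(h):
    """Equal gain transmission: unit-modulus weights (phase alignment only)."""
    n = len(h)
    return [hi.conjugate() / abs(hi) / math.sqrt(n) for hi in h]

def array_gain(h, w):
    """|h^T w|^2: received power gain for unit transmit power."""
    return abs(sum(hi * wi for hi, wi in zip(h, w))) ** 2

rng = random.Random(1)
h = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(4)]  # 4 Tx antennas

g_mrt = array_gain(h, mrt_weights(h))   # attains ||h||^2, the maximum
g_egt = array_gain(h, egt_weights(h))   # slightly lower, but constant modulus
g_fix = array_gain(h, [0.5] * 4)        # naive equal weights, no phase alignment
print(g_mrt, g_egt, g_fix)
```

By the Cauchy–Schwarz inequality the MRT gain upper-bounds any unit-power weight vector, while equal gain transmission trades a small gain loss for the constant-modulus property that matters under amplifier nonlinearity.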

Relevance: 60.00%

Abstract:

Data assimilation methods which avoid the assumption of Gaussian error statistics are being developed for geoscience applications. We investigate how the relaxation of the Gaussian assumption affects the impact observations have within the assimilation process. The effect of non-Gaussian observation error (described by the likelihood) is compared to previously published work studying the effect of a non-Gaussian prior. The observation impact is measured in three ways: the sensitivity of the analysis to the observations, the mutual information, and the relative entropy. These three measures have all been studied in the case of Gaussian data assimilation and, in this case, have a known analytical form. It is shown that the analysis sensitivity can also be derived analytically when at least one of the prior or likelihood is Gaussian. This derivation shows an interesting asymmetry in the relationship between analysis sensitivity and analysis error covariance when the two different sources of non-Gaussian structure are considered (likelihood vs. prior). This is illustrated for a simple scalar case and used to infer the effect of the non-Gaussian structure on mutual information and relative entropy, which are more natural choices of metric in non-Gaussian data assimilation. It is concluded that approximating non-Gaussian error distributions as Gaussian can give significantly erroneous estimates of observation impact. The degree of the error depends not only on the nature of the non-Gaussian structure, but also on the metric used to measure the observation impact and the source of the non-Gaussian structure.
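
For the scalar Gaussian reference case mentioned here, all three impact measures have known closed forms. A minimal sketch of those standard results (not the paper's non-Gaussian derivation):

```python
import math

def gaussian_obs_impact(mu_b, var_b, y, var_o):
    """Scalar Gaussian analysis step and three observation-impact measures."""
    var_a = 1.0 / (1.0 / var_b + 1.0 / var_o)      # analysis (posterior) variance
    mu_a = var_a * (mu_b / var_b + y / var_o)      # analysis mean
    sensitivity = var_a / var_o                    # d(mu_a)/dy, the Kalman gain
    mutual_info = 0.5 * math.log(var_b / var_a)    # independent of the value of y
    rel_entropy = 0.5 * (math.log(var_b / var_a)
                         + var_a / var_b
                         + (mu_a - mu_b) ** 2 / var_b
                         - 1.0)                    # grows with the surprise in y
    return mu_a, sensitivity, mutual_info, rel_entropy

# Same prior and observation error, two different observed values:
_, s1, mi1, re1 = gaussian_obs_impact(0.0, 1.0, 0.5, 0.5)
_, s2, mi2, re2 = gaussian_obs_impact(0.0, 1.0, 3.0, 0.5)
print(mi1 == mi2, re2 > re1)  # MI unchanged; relative entropy larger for surprising y
```

This makes the asymmetry between the metrics concrete: mutual information and the sensitivity are fixed by the variances alone, while relative entropy also responds to how far the observation pulls the analysis from the prior mean.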

Relevance: 60.00%

Abstract:

A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing. The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g. electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI participants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of-the-art automated artifact removal methods lagged auto-mutual information clustering (LAMIC) and fully automated statistical thresholding (FASTER), and is able to remove a wide range of artifact types including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts.
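
One ingredient of FORCe, wavelet decomposition followed by thresholding, can be sketched with a one-level Haar transform on a single toy channel. This is illustrative only; the full method also uses independent component analysis and operates on multichannel EEG:

```python
import math

def haar_dwt(x):
    """One-level Haar wavelet transform (len(x) must be even)."""
    s2 = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of the one-level Haar transform."""
    s2 = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s2, (a - d) / s2]
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; large transient spikes are attenuated."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# A slow oscillation (signal) plus one large spike (artifact-like transient):
x = [math.sin(2 * math.pi * i / 64) for i in range(64)]
x[30] += 5.0

a, d = haar_dwt(x)
clean = haar_idwt(a, soft_threshold(d, 1.0))
print(max(abs(v) for v in clean) < max(abs(v) for v in x))  # spike attenuated
```

Without thresholding the transform reconstructs the signal exactly; thresholding only the detail coefficients suppresses the sharp transient while leaving the slow oscillation largely untouched.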

Relevance: 60.00%

Abstract:

This paper proposes a filter-based algorithm for feature selection. The filter is based on the partitioning of the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general obtained competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, then the proposed filter may be preferred over its counterparts, thus becoming eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
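
The general idea, cluster redundant features and keep one representative per cluster, can be sketched with a greedy correlation-based stand-in. This is not the paper's algorithm (which estimates the number of clusters from data); the threshold and leader-clustering rule here are illustrative assumptions:

```python
import math
import random

def pearson(a, b):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def cluster_features(features, threshold=0.8):
    """Greedy leader clustering: a feature joins the first cluster whose
    representative it correlates with (|r| >= threshold), else it founds a new
    cluster. Returns the indices of the representatives (the selected subset)."""
    reps = []
    for i, f in enumerate(features):
        if not any(abs(pearson(f, features[r])) >= threshold for r in reps):
            reps.append(i)
    return reps

rng = random.Random(3)
base = [rng.gauss(0, 1) for _ in range(200)]
f0 = base                                              # original feature
f1 = [2.0 * v + 0.05 * rng.gauss(0, 1) for v in base]  # near-duplicate of f0
f2 = [rng.gauss(0, 1) for _ in range(200)]             # unrelated feature
print(cluster_features([f0, f1, f2]))  # f1 collapses into f0's cluster
```

The near-duplicate feature is absorbed into the first cluster, so only one feature per group of mutually redundant features survives, which is the dimensionality-reduction effect the filter exploits.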

Relevance: 60.00%

Abstract:

Nowadays, noninvasive methods of diagnosis have increased, driven by patient demand for fast, simple and painless exams. These methods have become possible because of the growth of technology that provides the necessary means of collecting and processing signals. New methods of analysis have been developed to understand the complexity of voice signals, such as nonlinear dynamics, which aims to explore the dynamic nature of voice signals. The purpose of this paper is to characterize healthy and pathological voice signals with the aid of relative entropy measures. The phase space reconstruction technique is also used as a way to select interesting regions of the signals. Three groups of samples were used, one from healthy individuals and the other two from people with vocal fold nodules and Reinke's edema. All of them are recordings of the sustained vowel /a/ from Brazilian Portuguese. The paper shows that nonlinear dynamical methods seem to be a suitable technique for voice signal analysis, due to the chaotic component of the human voice. Relative entropy is well suited due to its sensitivity to uncertainty, since the pathologies are characterized by an increase in signal complexity and unpredictability. The results showed that the pathological groups had higher entropy values, in accordance with the other vocal acoustic parameters presented. This suggests that these techniques may improve and complement the voice analysis methods currently available to clinicians. (C) 2008 Elsevier Inc. All rights reserved.

Relevance: 60.00%

Abstract:

Different data classification algorithms have been developed and applied in various areas to analyze and extract valuable information and patterns from large datasets with noise and missing values. However, none of them could consistently perform well over all datasets. To this end, ensemble methods have been suggested as a promising remedy. This paper proposes a novel hybrid algorithm, which is the combination of a multi-objective Genetic Algorithm (GA) and an ensemble classifier. While the ensemble classifier, which consists of a decision tree classifier, an Artificial Neural Network (ANN) classifier, and a Support Vector Machine (SVM) classifier, is used as the classification committee, the multi-objective Genetic Algorithm is employed as the feature selector to facilitate the ensemble classifier to improve the overall sample classification accuracy while also identifying the most important features in the dataset of interest. The proposed GA-Ensemble method is tested on three benchmark datasets, and compared with each individual classifier as well as methods based on mutual information theory, bagging and boosting. The results suggest that the GA-Ensemble method outperforms the other algorithms in the comparison and is a useful method for classification and feature selection problems.

Relevance: 60.00%

Abstract:

Multisource image fusion is usually achieved by repeatedly fusing source images in pairs. However, there is no guarantee on the delivered quality considering the amount of information to be squeezed into the same spatial dimension. This paper presents a fusion capacity measure and examines the limits at which fusing more images will not add further information. The fusion capacity index employs Mutual Information (MI) to measure how far the histogram of the examined image is from a uniformly distributed histogram of a saturated image.
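
As a loose sketch of the underlying idea, the distance of a grey-level histogram from the uniform histogram of a "full" image can be written as the Kullback–Leibler divergence log₂L − H. Note this is not the published index, which is defined through mutual information; the toy images below are hypothetical:

```python
import math

def histogram(pixels, levels=256):
    """Normalised grey-level histogram of a list of integer pixel values."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    n = float(len(pixels))
    return [c / n for c in counts]

def kl_from_uniform(hist):
    """D(hist || uniform) = log2(L) - H(hist): zero for a fully used, uniform
    histogram; large when grey levels are crushed together (saturation)."""
    entropy = -sum(p * math.log2(p) for p in hist if p > 0)
    return math.log2(len(hist)) - entropy

flat = histogram([i % 256 for i in range(2560)])        # every grey level used evenly
crushed = histogram([255] * 2000 + [0] * 560)           # saturated, two levels only
print(kl_from_uniform(flat), kl_from_uniform(crushed))  # ~0 vs. a large distance
```

A fused image whose histogram approaches saturation leaves little headroom for further information, which is the capacity limit the paper quantifies.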

Relevance: 60.00%

Abstract:

Map comparison is a relatively uncommon practice in acoustic seabed classification to date, contrary to the field of land remote sensing, where it has been developed extensively over recent decades. The aim here is to illustrate the benefits of map comparison in the underwater realm with a case study of three maps independently describing the seabed habitats of the Te Matuku Marine Reserve (Hauraki Gulf, New Zealand). The maps are obtained from a QTC View classification of a single-beam echosounder (SBES) dataset, manual segmentation of a sidescan sonar (SSS) mosaic, and automatic classification of a backscatter dataset from a multibeam echosounder (MBES). The maps are compared using pixel-to-pixel similarity measures derived from the literature in land remote sensing. All measures agree in presenting the MBES and SSS maps as the most similar, and the SBES and SSS maps as the least similar. The results are discussed with reference to the potential of MBES backscatter as an alternative to SSS mosaic for imagery segmentation and to the potential of joint SBES–SSS survey for improved habitat mapping. Other applications of map-similarity measures in acoustic classification of the seabed are suggested.
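
A standard pixel-to-pixel similarity measure from land remote sensing, Cohen's kappa, can be sketched on toy habitat maps. The classes and maps below are illustrative; the paper's exact similarity measures are not reproduced here:

```python
from collections import Counter

def kappa(map_a, map_b):
    """Cohen's kappa: pixel-to-pixel map agreement corrected for chance."""
    n = len(map_a)
    p_obs = sum(a == b for a, b in zip(map_a, map_b)) / n         # observed agreement
    ca, cb = Counter(map_a), Counter(map_b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / float(n * n)  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Toy habitat maps over 12 pixels with classes 'sand', 'reef', 'mud':
mbes = ['sand'] * 4 + ['reef'] * 4 + ['mud'] * 4
sss = ['sand'] * 4 + ['reef'] * 3 + ['mud'] * 5   # one pixel disagrees with mbes
sbes = ['mud'] * 6 + ['sand'] * 6                 # a very different map
print(kappa(mbes, sss), kappa(mbes, sbes))        # high similarity vs. below chance
```

Kappa near 1 indicates near-identical maps, 0 indicates chance-level agreement, and negative values worse-than-chance agreement, mirroring the ordering of map pairs reported in the study.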

Relevance: 60.00%

Abstract:

Healthcare plays an important role in promoting the general health and well-being of people around the world. The difficulty in healthcare data classification arises from the uncertainty and the high-dimensional nature of the medical data collected. This paper proposes an integration of the fuzzy standard additive model (SAM) with a genetic algorithm (GA), called GSAM, to deal with uncertainty and computational challenges. The GSAM learning process comprises three sequential steps: rule initialization by unsupervised learning using adaptive vector quantization clustering, evolutionary rule optimization by the GA, and parameter tuning by gradient-descent supervised learning. Wavelet transformation is employed to extract discriminative features for high-dimensional datasets. GSAM becomes highly capable when deployed with a small number of wavelet features, as its computational burden is remarkably reduced. The proposed method is evaluated using two frequently used medical datasets: the Wisconsin breast cancer and Cleveland heart disease datasets from the UCI Repository for machine learning. Experiments are organized with a five-fold cross validation, and the performance of the classification techniques is measured by a number of important metrics: accuracy, F-measure, mutual information and area under the receiver operating characteristic curve. Results demonstrate the superiority of GSAM compared to other machine learning methods including probabilistic neural network, support vector machine, fuzzy ARTMAP, and adaptive neuro-fuzzy inference system. The proposed approach is thus helpful as a decision support system for medical practitioners in healthcare practice.

Relevance: 60.00%

Abstract:

This paper introduces a method to classify EEG signals using features extracted by an integration of the wavelet transform and the nonparametric Wilcoxon test. Orthogonal Haar wavelet coefficients are ranked based on the Wilcoxon test's statistics. The most prominent discriminant wavelets are assembled to form a feature set that serves as input to the naïve Bayes classifier. Two benchmark datasets, named Ia and Ib, downloaded from the brain-computer interface (BCI) competition II are employed for the experiments. Classification performance is evaluated using accuracy, mutual information, Gini coefficient and F-measure. Widely used classifiers, including feedforward neural network, support vector machine, k-nearest neighbours, ensemble learning Adaboost and adaptive neuro-fuzzy inference system, are also implemented for comparison. The proposed combination of Haar wavelet features and the naïve Bayes classifier clearly outperforms the competing classification approaches and surpasses the best performance on the Ia and Ib datasets reported in the BCI competition II. The naïve Bayes classifier also provides a low computational cost that favours the implementation of a potential real-time BCI system.
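
The Wilcoxon ranking step can be sketched on its own (rank-sum statistic only, assuming no tied values; the Haar wavelet extraction and naïve Bayes classification are omitted, and the toy features are hypothetical):

```python
import random

def rank_sum(a, b):
    """Wilcoxon rank-sum statistic, centred: |W - E[W under H0]|.
    Assumes no tied values, reasonable for continuous features."""
    pooled = sorted(a + b)
    rank = {v: r + 1 for r, v in enumerate(pooled)}
    w = sum(rank[v] for v in a)
    n1, n2 = len(a), len(b)
    return abs(w - n1 * (n1 + n2 + 1) / 2.0)

def rank_features(class0, class1):
    """Order feature indices by discriminative power, most discriminative first."""
    n_feat = len(class0[0])
    scores = [rank_sum([row[f] for row in class0],
                       [row[f] for row in class1]) for f in range(n_feat)]
    return sorted(range(n_feat), key=lambda f: -scores[f])

rng = random.Random(7)
# Feature 0 separates the two classes; feature 1 is pure noise.
c0 = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(40)]
c1 = [[rng.gauss(3, 1), rng.gauss(0, 1)] for _ in range(40)]
print(rank_features(c0, c1))  # feature 0 ranked first
```

Features whose rank sum sits far from its null expectation differ strongly between the two classes, so keeping only the top-ranked coefficients yields the compact discriminant feature set fed to the classifier.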

Relevance: 60.00%

Abstract:

In this paper, we integrate two blind source separation (BSS) methods to estimate the individual channel state information (CSI) for the source-relay and relay-destination links of three-node two-hop multiple-input multiple-output (MIMO) relay systems. In particular, we propose a first-order Z-domain precoding technique for the blind estimation of the relay-destination channel matrix, while an algorithm based on the constant modulus and mutual information properties is developed to estimate the source-relay channel matrix. Compared with training-based MIMO relay channel estimation approaches, our algorithm offers better bandwidth efficiency, as no bandwidth is spent on training sequences. Numerical examples are shown to demonstrate the performance of the proposed algorithm. © 2014 IEEE.

Relevance: 60.00%

Abstract:

This paper introduces an approach to classify EEG signals using the wavelet transform and a fuzzy standard additive model (FSAM) with a tabu search learning mechanism. Wavelet coefficients are ranked based on statistics of the Wilcoxon test. The most informative coefficients are assembled to form a feature set that serves as input to the tabu-FSAM. Two benchmark datasets, named Ia and Ib, downloaded from the brain-computer interface (BCI) competition II are employed for the experiments. Classification performance is evaluated using accuracy, mutual information, Gini coefficient and F-measure. Widely used classifiers, including feedforward neural network, support vector machine, k-nearest neighbours, ensemble learning Adaboost and adaptive neuro-fuzzy inference system, are also implemented for comparison. The proposed tabu-FSAM method clearly outperforms the competing classifiers and surpasses the best performance on the Ia and Ib datasets reported in the BCI competition II.