935 results for Nonnegative blind source separation


Relevance: 100.00%

Abstract:

We consider blind signal detection in an asynchronous code-division multiple-access (CDMA) system employing short spreading sequences in the presence of unknown multipath fading. This approach is capable of countering multiple-access interference (MAI) in CDMA fading channels. The proposed blind multiuser detector is based on independent component analysis (ICA) to mitigate both MAI and noise. ICA has been widely used for blind source separation (BSS) of unknown sources from their mixtures, and can also be used to estimate the basis vectors of BSS. The aim is to embed an ICA algorithm within a wireless receiver in order to reduce the level of interference in wideband systems. Unlike the conventional multiuser detection receiver, this blind multiuser detector requires no training sequence. The proposed ICA blind multiuser detector is made robust with respect to uncertainty in the signature waveforms and the timing of the user of interest. Several experiments are performed to verify the validity of the proposed ICA algorithm.
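The preprocessing common to such ICA receivers can be illustrated with a small numpy sketch of the BSS mixture model and whitening step (the sources, mixing matrix, and sizes below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent, non-Gaussian sources (e.g., user symbol streams).
s = np.vstack([rng.uniform(-1, 1, 5000),
               np.sign(rng.standard_normal(5000))])

# Unknown mixing matrix models the channel and MAI.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s  # observed mixtures

# Whitening: decorrelate and normalize the mixtures so that any
# subsequent ICA stage only needs to find an orthogonal rotation.
x_c = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x_c))
W = E @ np.diag(d ** -0.5) @ E.T   # whitening matrix
z = W @ x_c

print(np.round(np.cov(z), 3))  # covariance of z is the identity
```

After whitening, the remaining unknown is a rotation, which is what the nongaussianity-based ICA stage of the receiver estimates.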


We propose a novel electroencephalographic application of a recently developed cerebral source extraction method (Functional Source Separation, FSS), which starts from extracranial signals and adds a functional constraint to the cost function of a basic independent component analysis model, without requiring solutions to be independent. Five ad-hoc functional constraints were used to extract the activity reflecting the temporal sequence of sensory information processing along the somatosensory pathway in response to separate left and right median nerve galvanic stimulation. Constraints required only the maximization of the responsiveness at specific latencies following sensory stimulation, without taking any frequency or spatial information into account. After source extraction, the reliability of each identified functional source (FS) was assessed based on the position of single dipoles fitted on its retroprojected signals and on a discrepancy measure. The FS positions were consistent with previously reported data (two early subcortical sources localized in the brain stem and thalamus, three later sources in cortical areas), leaving negligible residual activity at the corresponding latencies. The high-frequency oscillatory (HFO) component of the extracted activity was then analyzed. The integrity of the low-amplitude HFOs was preserved for each FS. On the basis of our data, we suggest that FSS can be an effective tool to investigate the HFO behavior of the different neuronal pools recruited at successive times after median nerve galvanic stimulation. As FSs are reconstructed along the entire experimental session, directional and dynamic HFO synchronization phenomena can be studied.


The rhythm created by spacing a series of brief tones in a regular pattern can be disguised by interleaving identical distractors at irregular intervals. The disguised rhythm can be unmasked if the distractors are allocated to a separate stream from the rhythm through integration with temporally overlapping captors. Listeners identified which of two rhythms was presented, and the accuracy and rated clarity of their judgments were used to estimate the fusion of the distractors and captors. The extent of fusion depended primarily on onset asynchrony and degree of temporal overlap. Harmonic relations had some influence, but only an extreme difference in spatial location was effective (dichotic presentation). Both preattentive and attentionally driven processes governed performance.


Background: Recent morpho-functional evidence indicates that abnormalities in the thalamus could play a major role in the expression of the neurophysiological and clinical correlates of migraine. Whether this phenomenon is primary or secondary to the thalamus's functional disconnection from the brainstem remains to be determined. We used a Functional Source Separation algorithm on the EEG signal to extract the activity of the different neuronal pools recruited at different latencies along the somatosensory pathway in interictal migraine without aura (MO) patients. Methods: Twenty MO patients and 20 healthy volunteers (HV) underwent EEG recording. Four ad-hoc functional constraints, two sub-cortical (FS14 at the brainstem and FS16 at the thalamic level) and two cortical (FS20 radial and FS22 tangential parietal sources), were used to extract the activity of successive stages of somatosensory information processing in response to separate left and right median nerve electric stimulation. A band-pass digital filter (450-750 Hz) was applied offline to extract high-frequency oscillatory (HFO) activity from the broadband EEG signal. Results: For both stimulated sides, significantly reduced sub-cortical brainstem (FS14) and thalamic (FS16) HFO activations characterized MO patients when compared with HV. No difference emerged in the two cortical HFO activations between the two groups. Conclusions: The present results are the first neurophysiological evidence supporting the hypothesis that a functional disconnection of the thalamus from the subcortical monoaminergic system may underlie the interictal cortical abnormal information processing in migraine. Further studies are needed to investigate the precise directional connectivity across the entire primary subcortical and cortical somatosensory pathway in interictal MO. Written informed consent to publication was obtained from the patient(s).


Functional MRI (fMRI) data often have a low signal-to-noise ratio (SNR) and are contaminated by strong interference from other physiological sources. A promising tool for extracting signals, even under low-SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). BSS is based on the assumption that the detected signals are a mixture of a number of independent source signals that are linearly combined via an unknown mixing matrix. BSS seeks to determine the mixing matrix in order to recover the source signals based on principles of statistical independence. In most cases, extraction of all sources is unnecessary; instead, a priori information can be applied to extract only the signal of interest. Herein we propose an algorithm based on a variation of ICA, called Dependent Component Analysis (DCA), in which the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We applied this method to fMRI data, aiming to find the hemodynamic response that follows neuronal activation from auditory stimulation in human subjects. The method localized a significant signal modulation in cortical regions corresponding to the primary auditory cortex. The results obtained by DCA were also compared with those of the General Linear Model (GLM), the most widely used method for analyzing fMRI datasets.
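The idea of extracting a source through its temporal structure can be illustrated with an AMUSE-style second-order sketch, which rotates whitened data using a time-lagged covariance; this is a generic stand-in with invented signals and lag, not the authors' DCA implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
t = np.arange(n)

# Sources with distinct autocorrelation structure: one periodic
# (mimicking a task-locked response) and one broadband noise.
s = np.vstack([np.sin(2 * np.pi * t / 40),
               rng.standard_normal(n)])
x = np.array([[1.0, 0.5], [0.7, 1.0]]) @ s  # observed mixtures

# Whiten the mixtures.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Eigenvectors of the symmetrized covariance at the chosen lag
# rotate the whitened data back onto the sources.
tau = 5
C = z[:, :-tau] @ z[:, tau:].T / (n - tau)
C = (C + C.T) / 2
_, V = np.linalg.eigh(C)
y = V.T @ z  # recovered sources (up to order, sign, and scale)

corr = np.corrcoef(y, s)[:2, 2:]
print(np.round(np.abs(corr), 2))  # near 1 for the matched pairs
```

The lag plays the same role as the autocorrelation-derived delay in the text: it separates signals whose second-order temporal statistics differ, with no higher-order statistics needed.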


Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data: since the sum of the abundance fractions is constant, they are statistically dependent. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
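For the non-blind case mentioned above (known signatures), unmixing reduces to a per-pixel least-squares fit; a minimal numpy sketch with synthetic signatures and Dirichlet-distributed abundances (a fully constrained solver would additionally enforce nonnegativity and the sum-to-one condition):

```python
import numpy as np

rng = np.random.default_rng(2)
bands, p, pixels = 50, 3, 100

# Endmember signatures (columns of M) and abundance fractions that
# are nonnegative and sum to one at every pixel.
M = rng.uniform(0.1, 1.0, (bands, p))
a = rng.dirichlet(np.ones(p), size=pixels).T   # shape (p, pixels)
X = M @ a + 0.001 * rng.standard_normal((bands, pixels))

# Unconstrained least-squares unmixing: one solve for all pixels.
a_hat = np.linalg.lstsq(M, X, rcond=None)[0]

print(np.abs(a_hat - a).max())  # small residual error
```

With noise or model mismatch, the unconstrained estimate can drift slightly negative, which is why the constrained least-squares variants cited in the text matter in practice.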
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
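The iterative projection step described above can be sketched as follows; this is a simplified VCA-flavored illustration on synthetic data with planted pure pixels, not the chapter's full algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
bands, p, pixels = 20, 3, 500

E_true = rng.uniform(0.2, 1.0, (bands, p))       # endmember signatures
a = rng.dirichlet(np.ones(p), size=pixels).T     # abundances on the simplex
a[:, :p] = np.eye(p)                             # plant one pure pixel per endmember
X = E_true @ a                                   # noiseless linear mixtures

found = []
U = np.zeros((bands, 0))                         # found-endmember subspace
for _ in range(p):
    f = rng.standard_normal(bands)
    # Project f orthogonal to the endmembers found so far.
    if U.shape[1]:
        f -= U @ np.linalg.lstsq(U, f, rcond=None)[0]
    idx = int(np.argmax(np.abs(f @ X)))          # extreme of the projection
    found.append(idx)
    U = np.hstack([U, X[:, [idx]]])

print(sorted(found))  # indices of the planted pure pixels: [0, 1, 2]
```

Because a linear functional over a simplex attains its extreme at a vertex, each projection's extreme is a pure pixel whenever one is present, which is exactly the pure-pixel assumption stated in the text.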


Methods for extracting features from physiological datasets are increasingly needed as clinical investigations of Alzheimer's disease (AD) in large and heterogeneous populations grow. General tools that allow diagnosis regardless of recording site, such as different hospitals, are essential and, if combined with inexpensive non-invasive methods, could critically improve mass screening of subjects with AD. In this study, we applied three state-of-the-art multiway array decomposition (MAD) methods to extract features from electroencephalograms (EEGs) of AD patients obtained from multiple sites. For comparison with MAD, spectral-spatial average filters (SSFs) of control and AD subjects were used, as well as a common blind source separation method, the algorithm for multiple unknown signal extraction (AMUSE). We trained a feed-forward multilayer perceptron (MLP) to validate and optimize AD classification from two independent databases. Using a third EEG dataset, we demonstrated that features extracted by MAD outperformed those obtained from SSFs and AMUSE in terms of root mean squared error (RMSE), reaching up to 100% accuracy in the test condition. We propose that MAD may be a useful tool to extract features for AD diagnosis, offering good generalization across multi-site databases and opening the door to new characterizations of the disease.


This paper deals with the goodness of the Gaussian assumption when designing second-order blind estimation methods in the context of digital communications. The low- and high-signal-to-noise-ratio (SNR) asymptotic performance of the maximum likelihood estimator, derived assuming Gaussian transmitted symbols, is compared with the performance of the optimal second-order estimator, which exploits the actual distribution of the discrete constellation. The asymptotic study concludes that the Gaussian assumption leads to the optimal second-order solution if the SNR is very low or if the symbols belong to a multilevel constellation such as quadrature-amplitude modulation (QAM) or amplitude-phase-shift keying (APSK). On the other hand, the Gaussian assumption can yield important losses at high SNR if the transmitted symbols are drawn from a constant modulus constellation such as phase-shift keying (PSK) or continuous-phase modulations (CPM). These conclusions are illustrated for the problem of direction-of-arrival (DOA) estimation of multiple digitally-modulated signals.


Independent component analysis (ICA) is a statistical analysis method that expresses observed data (mixtures of sources) as a linear transformation of latent variables (sources) assumed to be non-Gaussian and mutually independent. In some applications, the mixtures of sources are assumed to fall into groups such that mixtures belonging to the same group depend on the same sources. This implies that the coefficients of each column of the mixing matrix can be partitioned according to these same groups, and that all coefficients in some of these groups are zero. In other words, the mixing matrix is assumed to be group-sparse. This assumption eases interpretation and improves the precision of the ICA model. With this in mind, we propose to solve the ICA problem with a group-sparse mixing matrix using a method based on the adaptive group LASSO, which penalizes the 1-norm of the groups of coefficients with adaptive weights. In this thesis, we highlight the usefulness of our method in brain imaging applications, more precisely in magnetic resonance imaging. In simulations, we illustrate with an example the effectiveness of our method in shrinking the non-significant groups of coefficients in the mixing matrix to zero. We also show that the precision of the proposed method is superior to that of the maximum likelihood estimator penalized by the adaptive LASSO when the mixing matrix is group-sparse.
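In notation introduced here for illustration (not the thesis's own), the estimator described above can be written as an ICA log-likelihood penalized by an adaptive group LASSO term, where $A$ is the mixing matrix, $A_{gj}$ the coefficients of column $j$ lying in row group $g$, and $\tilde{A}$ an initial consistent estimate used to build the adaptive weights:

```latex
\hat{A} \;=\; \arg\max_{A}\; \ell(A; X)
  \;-\; \lambda \sum_{j=1}^{n} \sum_{g \in \mathcal{G}} w_{gj}\, \lVert A_{gj} \rVert_{2},
\qquad
w_{gj} \;=\; \lVert \tilde{A}_{gj} \rVert_{2}^{-\gamma}, \quad \gamma > 0.
```

Groups with a small initial estimate receive a large adaptive weight and are pushed exactly to zero, which is how the method zeroes out the non-significant coefficient groups in the mixing matrix.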


Contamination of the electroencephalogram (EEG) by artifacts greatly reduces the quality of the recorded signals, and there is a need for automated artifact removal methods. However, such methods are rarely evaluated against one another via rigorous criteria, with results often presented based upon visual inspection alone. This work presents a comparative study of automatic methods for removing blink, electrocardiographic, and electromyographic artifacts from the EEG. Three methods are considered: wavelet-, blind source separation (BSS)-, and multivariate singular spectrum analysis (MSSA)-based correction. These are applied to data sets containing mixtures of artifacts. Metrics are devised to measure the performance of each method. The BSS method is seen to be the best approach for artifacts of high signal-to-noise ratio (SNR). By contrast, MSSA performs well at low SNRs, but at the expense of a large number of false positive corrections.


The exponential growth in radio frequency (RF) applications is accompanied by great challenges, such as more efficient use of the spectrum and the design of new architectures for multi-standard receivers or software-defined radio (SDR). The key challenge in designing an SDR architecture is the implementation of a wide-band receiver that is reconfigurable and low cost, with low power consumption, a high level of integration, and flexibility. As a new solution for SDR design, a direct demodulator architecture based on five-port (multi-port demodulator) technology has been proposed. However, the use of the five-port device as a direct-conversion receiver requires an I/Q calibration (or regeneration) procedure in order to generate the in-phase (I) and quadrature (Q) components of the transmitted baseband signal. In this work, we evaluate the performance of a blind calibration technique based on independent component analysis for the regeneration of I/Q components in five-port downconversion; the technique requires no knowledge of training or pilot sequences in the transmitted signal and exploits only the statistical properties of the three output signals.


Blind Source Separation (BSS) refers to the problem of estimating original signals from observed linear mixtures with no knowledge about the sources or the mixing process. Independent Component Analysis (ICA) is a technique widely applied to the BSS problem; among the algorithms that implement it, FastICA is a high-performance iterative algorithm of low computational cost that uses nongaussianity measures based on higher-order statistics to estimate the original sources. The great number of applications where ICA has been found useful reflects the need for hardware implementations of this technique, and the natural parallelism of FastICA favors its implementation on digital hardware. This work proposes the implementation of FastICA on a reconfigurable hardware platform to assess the viability of its use in blind source separation problems, more specifically in a hardware prototype embedded on a Field Programmable Gate Array (FPGA) board for monitoring beds in hospital environments. The implementations are carried out as Simulink models and synthesized using the DSP Builder software from Altera Corporation.
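The fixed-point update that a FastICA hardware core must realize can be written as a short software reference in numpy (a one-unit iteration with the tanh nonlinearity; the sources and mixing matrix are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
s = np.vstack([np.sign(rng.standard_normal(n)),            # binary source
               rng.uniform(-np.sqrt(3), np.sqrt(3), n)])   # uniform source
x = np.array([[0.8, 0.3], [0.2, 0.9]]) @ s

# Whitening (a preprocessing block before the FastICA core).
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# One-unit FastICA fixed-point iteration with g(u) = tanh(u):
#   w <- E[z g(w'z)] - E[g'(w'z)] w, then renormalize.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    u = w @ z
    w_new = (z * np.tanh(u)).mean(axis=1) - (1 - np.tanh(u) ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-10
    w = w_new
    if converged:
        break

y = w @ z  # one recovered source (up to sign and scale)
corr = np.abs(np.corrcoef(y, s)[0, 1:])
print(np.round(corr.max(), 2))  # close to 1 for the matched source
```

The loop body is dominated by multiply-accumulate operations over the sample block, which is what makes the algorithm attractive for a parallel FPGA datapath; only the normalization step needs special treatment in fixed-point arithmetic.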


Conventional methods for nonlinear blind source separation generally impose a series of restrictions to obtain the solution, often leading to an imperfect separation of the original sources and high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve linear and, later, nonlinear blind source separation problems. In the linear case, we apply genetic algorithms with Rényi negentropy as the measure of independence to find a separation matrix from linear mixtures of waveforms, audio, and images, and compare the results with two types of Independent Component Analysis algorithms widespread in the literature. Subsequently, we use the same measure of independence as the cost function in the genetic algorithm to recover source signals mixed by nonlinear functions through an artificial neural network of the radial basis function type. Genetic algorithms are powerful tools for global search and are therefore well suited for blind source separation problems. Tests and analyses are carried out through computer simulations.
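The use of an independence measure as the cost function of a global search can be shown in miniature; the sketch below substitutes absolute excess kurtosis and a plain grid search for the dissertation's Rényi negentropy and genetic algorithm, so it is an analogy rather than the authors' method:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
s = np.vstack([np.sign(rng.standard_normal(n)),
               rng.uniform(-1, 1, n)])
x = np.array([[1.0, 0.4], [0.5, 1.0]]) @ s  # linear mixtures

# Whiten, so the remaining unknown is a single rotation angle.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

def kurt(u):
    # Excess kurtosis of a (unit-variance) signal.
    return np.mean(u ** 4) - 3

def cost(theta):
    # Total nongaussianity of the rotated outputs: the quantity a
    # population-based search would maximize over candidate angles.
    c, s_ = np.cos(theta), np.sin(theta)
    y = np.array([[c, -s_], [s_, c]]) @ z
    return abs(kurt(y[0])) + abs(kurt(y[1]))

thetas = np.linspace(0, np.pi / 2, 200)
best = thetas[np.argmax([cost(t) for t in thetas])]
print(cost(best) >= cost(0.0))  # True
```

A genetic algorithm replaces the grid with a population of candidate separation matrices evolved under the same cost, which scales to more sources than an exhaustive search over angles.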


This dissertation presents a new proposal for the Direction of Arrival (DOA) detection problem for more than one signal impinging simultaneously on an antenna array with linear or planar geometry, using intelligent algorithms. The DOA estimator is developed using techniques of Conventional Beamforming (CBF), Blind Source Separation (BSS), and the neural estimator MRBF (Modular structure of Radial Basis Functions). The developed MRBF estimator has its capacity extended through interaction with the BSS technique. The BSS stage estimates the steering vectors of the multiple plane waves that reach the array at the same frequency; that is, it separates the mixed signals without a priori information. The technique developed in this work makes it possible to identify the directions of multiple sources and to identify and exclude interfering sources.


This work considers the development of a filtering system composed of an intelligent algorithm that separates information from noise coming from sensors interconnected by a Foundation Fieldbus (FF) network. The algorithm is implemented through FF standard function blocks, with on-line training through OPC (OLE for Process Control), and embedded technology in a DSP (Digital Signal Processor) that interacts with the fieldbus devices. The ICA (Independent Component Analysis) technique, which explores the possibility of separating mixed signals based on the fact that they are statistically independent, was chosen for this Blind Source Separation (BSS) process. The algorithm, its implementations, and the results are presented.