Abstract:
The electroencephalogram (EEG) is a medical technology used to monitor the brain and to diagnose many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is sufficiently unobtrusive and mobile to allow continuous monitoring of the patient, in either clinical or domestic environments. Consequently, the EEG is the current tool of choice for continuous brain monitoring where temporal resolution, ease of use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the classification performance of automated neurological event detection. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can move from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art automated epileptiform activity detection systems, and (iii) false detections in state-of-the-art automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system that uses supervised machine learning classifiers to automatically detect head-movement artefacts in ambulatory EEG. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional signals, recorded using gyroscopes, are used to detect head movements and thereby bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from the support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection that compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner: blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefacts from the EEG. Using these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, thereby improving the performance of the underlying diagnostic technology and bringing its deployment in the real-world clinical domain one step closer.
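The abstract leaves the artefact-removal procedure at a high level. Purely as an illustration, a minimal ICA-based sketch of reference-guided artefact removal is given below; the function name, the correlation threshold and the use of FastICA are assumptions for the sketch, not the thesis's actual algorithm.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_reference_artefact(eeg, reference, corr_threshold=0.6):
    """Remove artefact components from multichannel EEG using a reference signal.

    eeg       : (n_samples, n_channels) EEG recording
    reference : (n_samples,) simultaneously recorded artefact reference
                (e.g. a respiration trace)
    Returns the reconstructed EEG with artefact-dominated components zeroed.
    """
    ica = FastICA(n_components=eeg.shape[1], whiten="unit-variance", random_state=0)
    sources = ica.fit_transform(eeg)              # (n_samples, n_components)

    # Flag and discard components that track the reference signal
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], reference)[0, 1]
        if abs(r) > corr_threshold:
            sources[:, k] = 0.0

    # Project the remaining components back into channel space
    return sources @ ica.mixing_.T + ica.mean_
```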
Abstract:
This work addresses the joint compensation of I/Q imbalances and carrier phase synchronization errors in zero-IF receivers. The compensation scheme is based on blind source separation, which provides a simple yet potent means of jointly compensating for these errors, independent of the modulation format and constellation size used. The low complexity of the algorithm makes it a suitable option for real-time deployment as well as practical for integration into monolithic receiver designs.
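The abstract does not reproduce the algorithm itself. One well-known family of blind I/Q compensators drives the complementary variance E[y^2] of the corrected signal to zero, a property that holds for any proper (circular) constellation. A minimal numpy sketch along those lines, with assumed imbalance values and step size rather than the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Proper (circular) QPSK baseband signal: E[s^2] = 0, as for any symmetric constellation
N = 200_000
s = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

# Receiver I/Q imbalance model z = K1*s + K2*conj(s) (assumed 5% gain, 3 deg phase error)
g, phi = 1.05, np.deg2rad(3.0)
K1 = (1 + g * np.exp(-1j * phi)) / 2
K2 = (1 - g * np.exp(1j * phi)) / 2
z = K1 * s + K2 * np.conj(s)

# Blind compensator y = z - w*conj(z), adapted so that E[y^2] -> 0;
# the desired fixed point w = K2/conj(K1) cancels the image completely
w, mu = 0.0 + 0.0j, 1e-4
trajectory = np.empty(N, dtype=complex)
for n in range(N):
    y = z[n] - w * np.conj(z[n])
    w += mu * y**2                    # stochastic update toward E[y^2] = 0
    trajectory[n] = w
w = trajectory[N // 2:].mean()        # tail averaging suppresses gradient noise

# After compensation: y = (K1 - w*conj(K2))*s + (K2 - w*conj(K1))*conj(s)
irr_db = lambda k1, k2: 10 * np.log10(abs(k1) ** 2 / abs(k2) ** 2)
print(f"IRR before: {irr_db(K1, K2):.1f} dB")
print(f"IRR after:  {irr_db(K1 - w * np.conj(K2), K2 - w * np.conj(K1)):.1f} dB")
```

Because the update uses only second-order statistics of the received signal, it needs no pilot tones and is indifferent to the constellation, which matches the modulation-independence claimed in the abstract.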
Abstract:
An adaptive self-calibrating image rejection receiver is described, containing a modified Weaver image rejection mixer and a Digital Image Rejection Processor (DIRP). The blind-source-separation-based DIRP eliminates the I/Q errors, improving the Image Rejection Ratio (IRR) without the need for trimming or power-hungry discrete components. Hardware complexity is minimal, requiring only two complex coefficients; hence the processor can be easily integrated into the signal processing path of any receiver. Simulation results show that the proposed approach achieves 75-97 dB of IRR.
Abstract:
In this paper, we carry out a detailed performance analysis of the blind source separation based I/Q corrector operating at baseband. Performance of the digital I/Q corrector is evaluated not only under time-varying phase and gain errors but also in the presence of multipath and Rayleigh fading channels. Performance under low SNR and with different modulation formats and constellation sizes is also evaluated, and the BER improvement after correction is illustrated. The results indicate that the adaptive algorithm offers adequate performance for most communication applications, hence reducing the matching requirements of the analog front-end and enabling higher levels of integration.
Abstract:
The I/Q mismatches in quadrature radio receivers result in finite and usually insufficient image rejection, greatly degrading performance. In this paper we present a detailed analysis of the Blind Source Separation (BSS) based mismatch corrector in terms of its structure, convergence and performance. The results indicate that the mismatch can be effectively compensated during normal operation as well as in rapidly changing environments. Since the compensation is carried out before any modulation-specific processing, the proposed method works with all standard modulation formats and is amenable to low-power implementations.
Abstract:
In this paper, the digital part of a self-calibrating quadrature receiver, containing a digital calibration engine, is described. The blind-source-separation-based calibration engine eliminates the RF impairments in real time, improving the receiver's performance without the need for test/pilot tones, trimming or power-hungry discrete components. Furthermore, an efficient time-multiplexed calibration-engine architecture is proposed and implemented on an FPGA utilising a reduced-range multiplier structure. The use of reduced-range multipliers results in a substantial reduction in both area and power consumption without compromising performance when compared with an efficiently designed general-purpose multiplier. The performance of the calibration engine does not depend on the modulation format or the constellation size of the received signal; hence it can be easily integrated into the digital signal processing path of any receiver.
Abstract:
This paper details the design and implementation of a low-power, hardware-efficient, adaptive self-calibrating image rejection receiver based on blind source separation, which alleviates the impairments of the RF analog front-end. A hybrid strength-reduced, rescheduled data-flow, low-power implementation of the adaptive self-calibration algorithm is developed, and its efficiency is demonstrated through simulation case studies. A behavioral and structural model is developed in Matlab, together with a low-level architectural design in VHDL, providing valuable test benches for the performance measurements undertaken on the detailed algorithms and structures.
Abstract:
This paper details the design of a power-aware adaptive digital image rejection receiver based on blind source separation, which alleviates the impairments of the RF analog front-end. Power-aware system design at the RTL level, without redesigning arithmetic circuits, is used to reduce the power consumption of nomadic devices. Power-aware multipliers with configurable precision are used to trade off image rejection ratio (IRR) performance against power consumption. Results of the simulation case studies demonstrate that the IRR performance of the power-aware system, although slightly degraded, is comparable to that of the normal implementation and well within acceptable limits.
Abstract:
Independent component analysis (ICA) is a statistical analysis method that expresses observed data (source mixtures) as a linear transformation of latent variables (sources) assumed to be non-Gaussian and mutually independent. In some applications, the source mixtures are assumed to form groups such that mixtures belonging to the same group are functions of the same sources. This implies that the coefficients of each column of the mixing matrix can be grouped accordingly, and that all the coefficients of some of these groups are zero. In other words, the mixing matrix is assumed to be group-sparse. This assumption eases interpretation and improves the accuracy of the ICA model. With this in mind, we propose to solve the ICA problem with a group-sparse mixing matrix using a method based on the adaptive group LASSO, which penalizes the l1-norm of the groups of coefficients with adaptive weights. In this thesis, we highlight the usefulness of our method in brain imaging applications, more precisely in magnetic resonance imaging. In simulations, we illustrate with an example the effectiveness of our method in shrinking the non-significant groups of coefficients in the mixing matrix towards zero. We also show that the accuracy of the proposed method is superior to that of the maximum likelihood estimator penalized by the adaptive LASSO when the mixing matrix is group-sparse.
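As a rough illustration of the key ingredient (not the thesis's estimator itself), the adaptively weighted block soft-thresholding that shrinks whole groups of mixing-matrix coefficients to zero can be written as a proximal step. The helper below is hypothetical; the weights follow the usual adaptive group LASSO recipe, w = ||B0_block||^-gamma, computed from an initial unpenalized estimate B0.

```python
import numpy as np

def prox_adaptive_group_lasso(B, B0, groups, lam, gamma=1.0):
    """One proximal (block soft-thresholding) step for the penalty
    lam * sum_blocks w_block * ||B_block||_2, where each block is the set of
    rows of one mixing-matrix column belonging to one group of mixtures, and
    w_block = ||B0_block||^-gamma is an adaptive weight from an initial
    unpenalized estimate B0.

    B, B0  : (n_mixtures, n_sources) current and initial mixing-matrix estimates
    groups : length-n_mixtures integer labels assigning each mixture to a group
    """
    B, groups = B.copy(), np.asarray(groups)
    for g in np.unique(groups):
        rows = np.flatnonzero(groups == g)
        for j in range(B.shape[1]):
            w = 1.0 / max(np.linalg.norm(B0[rows, j]), 1e-12) ** gamma
            norm = np.linalg.norm(B[rows, j])
            # The whole block shrinks together; weak blocks go exactly to zero
            B[rows, j] *= max(0.0, 1.0 - lam * w / max(norm, 1e-12))
    return B
```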
Abstract:
This thesis studies models of high-dimensional sequences based on recurrent neural networks (RNNs) and their application to music and speech. Although in principle RNNs can represent the long-term dependencies and complex temporal dynamics inherent in sequences of interest such as video, audio and natural language, they have not been used to their full potential since their introduction by Rumelhart et al. (1986a), owing to the difficulty of training them efficiently by gradient descent. Recently, the successful application of Hessian-free optimization and other advanced training techniques has led to a resurgence of their use in several state-of-the-art systems. The work in this thesis is part of this development. The central idea is to exploit the flexibility of RNNs to learn a probabilistic description of sequences of symbols, that is, high-level information associated with the observed signals, which in turn can serve as a prior to improve the accuracy of information retrieval. For example, by modelling the evolution of groups of notes in polyphonic music, of chords in a harmonic progression, of phonemes in an utterance, or of individual sources in an audio mixture, we can significantly improve methods for polyphonic transcription, chord recognition, speech recognition and audio source separation, respectively. The practical application of our models to these tasks is detailed in the last four articles presented in this thesis. In the first article, we replace the output layer of an RNN with conditional restricted Boltzmann machines to describe much richer multimodal output distributions. In the second article, we evaluate and propose advanced methods for training RNNs. In the last four articles, we examine different ways of combining our symbolic models with deep networks and non-negative matrix factorization, notably via products of experts, input/output architectures and generative frameworks generalizing hidden Markov models. We also propose and analyze efficient inference methods for these models, such as greedy chronological search, high-dimensional beam search, pruned beam search and gradient descent. Finally, we address the issues of label bias, teacher forcing, temporal smoothing, regularization and pre-training.
Abstract:
When the orthogonal space-time block code (STBC), or Alamouti code, is applied in a multiple-input multiple-output (MIMO) communications system, optimum reception can be achieved by simple signal decoupling at the receiver. The performance, however, deteriorates significantly in the presence of co-channel interference (CCI) from other users. In this paper, this CCI problem is overcome by applying independent component analysis (ICA), a blind source separation algorithm. The approach rests on the fact that, because the transmission data from every transmit antenna are mutually independent, they can be effectively separated at the receiver on the principle of blind source separation, and the CCI is thereby suppressed. Although not required by the ICA algorithm itself, a small number of training data are necessary to eliminate the phase and order ambiguities at the ICA outputs, leading to a semi-blind approach. Numerical simulations are also presented to verify the proposed ICA approach in the multiuser MIMO system.
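As a toy illustration of the semi-blind idea, the sketch below uses real-valued BPSK streams and a random flat channel as a stand-in for the paper's complex Alamouti/MIMO setup; the training length is an assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Two mutually independent BPSK streams: desired user plus co-channel interferer
N = 4000
S = rng.choice([-1.0, 1.0], size=(N, 2))

# Unknown flat channel mixes the streams at the two receive antennas
A = rng.normal(size=(2, 2))
X = S @ A.T + 0.05 * rng.normal(size=(N, 2))

# Blind separation: independence lets ICA recover the streams,
# but only up to an unknown order and sign
Y = FastICA(n_components=2, whiten="unit-variance", random_state=0).fit_transform(X)

# Semi-blind step: a short training prefix resolves the order/sign ambiguity
n_train = 16
corr = Y[:n_train].T @ S[:n_train]          # (component, source) correlations
for j in range(2):
    i = np.argmax(np.abs(corr[:, j]))       # component matching source j
    s_hat = np.sign(corr[i, j]) * np.sign(Y[:, i])
    ber = np.mean(s_hat[n_train:] != S[n_train:, j])
    print(f"stream {j}: BER = {ber:.4f}")
```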
Abstract:
This paper outlines a method for automatic artefact removal from multichannel recordings of event-related potentials (ERPs). The proposed method is based on, firstly, separation of the ERP recordings into independent components using temporal decorrelation source separation (TDSEP). Secondly, the novel lagged auto-mutual information clustering (LAMIC) algorithm is used to cluster the estimated components, together with ocular reference signals, into clusters corresponding to cerebral and non-cerebral activity. Thirdly, the components in the cluster containing the ocular reference signals are discarded. The remaining components are then recombined to reconstruct the clean ERPs.
Abstract:
Contamination of the electroencephalogram (EEG) by artifacts greatly reduces the quality of the recorded signals, and there is a need for automated artifact removal methods. However, such methods are rarely evaluated against one another via rigorous criteria, with results often presented on the basis of visual inspection alone. This work presents a comparative study of automatic methods for removing blink, electrocardiographic and electromyographic artifacts from the EEG. Three methods are considered: wavelet-, blind source separation (BSS)- and multivariate singular spectrum analysis (MSSA)-based correction. These are applied to data sets containing mixtures of artifacts, and metrics are devised to measure the performance of each method. The BSS method proves the best approach for artifacts of high signal-to-noise ratio (SNR). By contrast, MSSA performs well at low SNRs, but at the expense of a large number of false positive corrections.
Abstract:
In Borlänge, source separation has been the basis for household waste management for over five years. This report reviews today's system and gives a model for further follow-up through waste grouping. In the basic system, waste is separated into three fractions: biodegradable waste, waste to energy and waste to landfill. All waste is packed in plastic bags, put in separate containers for each fraction, and collected from the property. Separate analyses were made of waste from single-family houses and apartment buildings. The amount of waste per household and week, the number of non-sorted bags, and the purity, recovery rate and density of each fraction were calculated. The amount of packaging collected together with the household waste is also given. Material collected under the Swedish law of Producers' Responsibility is not covered in this report.
Abstract:
The exponential growth in radio frequency (RF) applications brings great challenges, such as more efficient use of the spectrum and the design of new architectures for multi-standard receivers or software defined radio (SDR). The key challenge in designing a software defined radio architecture is the implementation of a wide-band receiver that is reconfigurable, low cost and low power, with a high level of integration and flexibility. As a new solution for SDR design, a direct demodulator architecture based on five-port technology, or multi-port demodulator, has been proposed. However, the use of the five-port as a direct-conversion receiver requires an I/Q calibration (or regeneration) procedure in order to generate the in-phase (I) and quadrature (Q) components of the transmitted baseband signal. In this work, we evaluate the performance of a blind calibration technique for the regeneration of I/Q components in five-port downconversion, based on independent component analysis and requiring no knowledge of training or pilot sequences in the transmitted signal, exploiting the statistical properties of the three output signals.