951 results for Array Signal Processing


Relevance: 90.00%

Publisher:

Abstract:

Subspaces and manifolds are two powerful models for high-dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated but whose signals are determined by a small number of parameters; examples are images of human faces under different poses or expressions, and handwritten digits in varying styles. However, there will always be some degree of mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
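The principal angles that govern these error exponents can be computed directly from orthonormal bases of the two subspaces. A minimal NumPy sketch with a toy pair of 2-D subspaces of R^4 (the function name and example are ours, not from the dissertation):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spans of A and B, via SVD."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)                      # orthonormal basis for span(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))     # clip guards against rounding

# Two 2-D subspaces of R^4 that share one direction
A = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])
B = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])
theta = principal_angles(A, B)
prod_sines = np.prod(np.sin(theta))         # governs error when mismatch is tiny
sum_sq_sines = np.sum(np.sin(theta) ** 2)   # governs error when mismatch is larger
```

The shared direction yields a zero angle, so the product of sines collapses to zero while the sum of squared sines still registers the remaining orthogonal direction.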

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity) and tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and thereby mitigate overfitting. Two overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to align with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
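The first approach can be sketched with a graph built on the data: decisions should change little across edges of a neighborhood graph that approximates the manifold. A minimal NumPy sketch (the function names and the k-NN construction are our assumptions, not the dissertation's code):

```python
import numpy as np

def knn_adjacency(X, k=2):
    """Symmetric k-nearest-neighbor adjacency matrix for rows of X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                 # exclude self-neighbors
    W = np.zeros_like(D)
    for i, js in enumerate(np.argsort(D, axis=1)[:, :k]):
        W[i, js] = 1.0
    return np.maximum(W, W.T)                   # symmetrize

def smoothness_penalty(F, W):
    """Graph-Laplacian penalty (1/2) sum_ij W_ij ||F_i - F_j||^2 = tr(F' L F)."""
    L = np.diag(W.sum(axis=1)) - W              # combinatorial Laplacian
    return np.trace(F.T @ L @ F)

X = np.random.RandomState(0).randn(10, 3)       # toy data points (rows)
W = knn_adjacency(X, k=2)
penalty = smoothness_penalty(np.ones((10, 2)), W)  # constant features: zero penalty
```

Adding such a penalty to the training loss encourages decisions that vary smoothly over local neighborhoods, which is the essence of the first approach.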

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods with affine subspaces, a slowly varying manifold can be tracked efficiently as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation but the higher the computational complexity. A multiscale approximation scheme is proposed in which the local approximating subspaces are organized in a tree structure; splitting and merging of tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
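A single stochastic-gradient update for tracking one such subspace can be sketched as follows (a GROUSE-style step; the step size and names are our choices, not the dissertation's):

```python
import numpy as np

def track_subspace(U, x, eta=0.1):
    """One stochastic-gradient update of an orthonormal basis U (n x d)
    from a streaming sample x; a GROUSE-style sketch."""
    w = U.T @ x                       # coordinates of x in the current basis
    r = x - U @ w                     # residual outside the subspace
    U = U + eta * np.outer(r, w)      # descend on the projection error
    Q, _ = np.linalg.qr(U)            # restore orthonormality
    return Q

rng = np.random.RandomState(0)
true_basis, _ = np.linalg.qr(rng.randn(8, 2))   # hidden 2-D subspace of R^8
U, _ = np.linalg.qr(rng.randn(8, 2))            # random initial estimate
for _ in range(500):
    x = true_basis @ rng.randn(2)               # streaming sample from the subspace
    U = track_subspace(U, x)
err = np.linalg.norm(true_basis @ true_basis.T - U @ U.T)  # projector distance
```

With stationary data the projector distance shrinks toward zero; a slowly drifting subspace would simply keep the update running.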

Relevance: 90.00%

Publisher:

Abstract:

Atrial fibrillation (AF) is a major global health issue, as it is the most prevalent sustained supraventricular arrhythmia. Catheter-based ablation of parts of the atria is considered an effective treatment for AF. The main objective of this research is to analyze atrial intracardiac electrograms (IEGMs) and extract insightful information for ablation therapy. Throughout this thesis we propose several computationally efficient algorithms that take streams of IEGMs from different atrial sites as input signals, sequentially analyze them in various domains (e.g., time and frequency), and create a color-coded three-dimensional map of the atria to be used in ablation therapy.
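As an illustration of the frequency-domain side of such a pipeline, the dominant frequency of an IEGM segment (a quantity commonly mapped onto the atria in AF studies) can be estimated from the magnitude spectrum. This is a generic sketch, not the thesis's exact algorithm; the sampling rate and the 6 Hz test tone are assumptions:

```python
import numpy as np

def dominant_frequency(iegm, fs):
    """Estimate the dominant frequency (Hz) of an IEGM segment."""
    seg = (iegm - np.mean(iegm)) * np.hanning(len(iegm))  # de-mean, then window
    spec = np.abs(np.fft.rfft(seg))                       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(iegm), d=1.0 / fs)
    return freqs[np.argmax(spec)]

fs = 200.0                              # sampling rate in Hz (assumed)
t = np.arange(400) / fs                 # a 2-second segment
x = np.sin(2 * np.pi * 6.0 * t)         # 6 Hz tone standing in for AF activity
df = dominant_frequency(x, fs)          # close to 6.0 Hz
```

Computing such a statistic per atrial site and per time window is one way a sequential stream of IEGMs can be reduced to values for a color-coded map.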

Relevance: 90.00%

Publisher:

Abstract:

This paper is based on the novel use of a very high fidelity decimation filter chain for electrocardiogram (ECG) signal acquisition and data conversion. The multiplier-free, multi-stage structure of the proposed filters lowers power dissipation while minimizing circuit area, both crucial design constraints for wireless noninvasive wearable health-monitoring products, given the scarce operational resources in their electronic implementation. The presented filter has a decimation ratio of 128 and works in tandem with a 1-bit 3rd-order Sigma-Delta (ΣΔ) modulator, achieving 0.04 dB passband ripple and -74 dB stopband attenuation. The work reported here investigates the non-linear phase effects of the proposed decimation filters on the ECG signal by carrying out a comparative study after phase correction. It concludes that enhanced phase linearity is not crucial for ECG acquisition and data conversion applications, since the distortion of the acquired signal due to phase non-linearity is insignificant for both the original and the phase-compensated filters. Freedom from signal distortion matters because, as noted in the state of the art, distortion might lead to misdiagnosis. This article demonstrates that, with their minimal power consumption and minimal signal distortion, the proposed decimation filters can effectively be employed in biosignal data-processing units.
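A classic example of a multiplier-free decimation stage is the cascaded integrator-comb (CIC) filter, which uses only additions and subtractions. The sketch below illustrates the multiplier-free idea with the paper's decimation ratio of 128 as the default; it is not a reproduction of the paper's filter chain:

```python
import numpy as np

def cic_decimate(x, R=128, N=3):
    """N-stage CIC decimator by factor R: integrators, downsample, combs.
    Adds and subtracts only -- no multipliers."""
    y = np.asarray(x, dtype=np.int64)   # integer arithmetic, as in hardware
    for _ in range(N):                  # integrator stages
        y = np.cumsum(y)
    y = y[R - 1::R]                     # downsample by R
    for _ in range(N):                  # comb stages
        y = np.diff(y, prepend=0)
    return y

out = cic_decimate(np.ones(32), R=4, N=3)   # DC gain is R**N = 64 at steady state
```

In a real ΣΔ front end the CIC is typically followed by compensation stages that correct its passband droop; the paper's chain targets 0.04 dB ripple and -74 dB attenuation, which a bare CIC alone does not achieve.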

Relevance: 90.00%

Publisher:

Abstract:

Theories of sparse signal representation, wherein a signal is decomposed as the sum of a small number of constituent elements, play increasingly important roles in both mathematical signal processing and neuroscience, despite the differences between the signal models used in the two domains. After reviewing preliminary material on sparse signal models, I use work on compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community and has resulted in tomographic reconstruction software that is competitive with the state of the art in its field. Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship with olfaction in locusts. I implement a numerical ODE model of the activity of the neural populations responsible for sparse odor coding in locusts as part of a project involving offset spiking in the Kenyon cells, and I explain the validation procedures we devised to help assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which seeks, with some success, to explain statistical properties of the sparse coding processes carried out in the network.
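As a concrete instance of the sparse reconstruction that compressed sensing relies on, the l1-regularized least-squares problem can be solved by iterative soft-thresholding (ISTA). This is a generic sketch, not the thesis's tomography software; the problem sizes and seed are toy assumptions:

```python
import numpy as np

def ista(A, y, lam=0.01, steps=1000):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L             # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.RandomState(0)
A = rng.randn(20, 40) / np.sqrt(20)               # underdetermined sensing matrix
x0 = np.zeros(40)
x0[[3, 17, 31]] = [1.0, -2.0, 1.5]                # 3-sparse ground truth
y = A @ x0                                        # compressed measurements
xh = ista(A, y)                                   # sparse reconstruction
```

With far fewer measurements than unknowns, the l1 penalty recovers the few active coefficients, which is the phenomenon exploited for tomographic reconstruction from limited projections.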

Relevance: 90.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.

Relevance: 90.00%

Publisher:

Abstract:

Accurate force measurement is required in many applications, notably determining the mechanical strength of materials, quality control during production, weighing, and the safety of people. Given this broad need, various techniques and instruments for measuring force have been developed over time. Among them, force sensors, also called load cells, stand out for their simplicity, precision, and versatility. The most common example is based on resistive strain gauges which, combined with a mechanical structure, form a load cell. Sensors of this type have low sensitivity and a non-zero offset at rest, which makes their signal conditioning complex. This work presents a solution for signal conditioning and data acquisition for load cells that, as far as our investigation shows, is novel. The device performs signal conditioning, digitization, and communication in a single unit. The idea follows the smart-sensor paradigm, in which a single electronic device attached to a load cell carries out a set of signal-processing and data-transmission operations. In particular, it allows the creation of an ad-hoc network using the IIC communication protocol. The system is intended for a load platform developed at the Escola Superior de Tecnologia e Gestão de Bragança, where it will be deployed. Because the platform is designed to read forces along three axes, it contains four load cells with two outputs each, for a total of eight outputs. The existing signal-conditioning hardware is analog and requires a board of considerable size for each output.
From a functional standpoint it has several problems, notably that gain and offset must be adjusted manually, so a circuit that performs better when handling an array of sensors of this type is essential.

Relevance: 80.00%

Publisher:

Abstract:

We consider distributions $u \in \mathcal{S}'(\mathbb{R})$ of the form $u(t) = \sum_{n \in \mathbb{N}} a_n e^{i\lambda_n t}$, where $(a_n)_{n \in \mathbb{N}} \subset \mathbb{C}$ and $\Lambda = (\lambda_n)_{n \in \mathbb{N}} \subset \mathbb{R}$ have the following properties: $(a_n)_{n \in \mathbb{N}} \in s'$, that is, there is a $q \in \mathbb{N}$ such that $(n^{-q} a_n)_{n \in \mathbb{N}} \in \ell^1$; for the real sequence $\Lambda$, there are $n_0 \in \mathbb{N}$, $C > 0$, and $\alpha > 0$ such that $n \ge n_0 \Rightarrow |\lambda_n| \ge C n^{\alpha}$. Let $I_\varepsilon \subset \mathbb{R}$ be an interval of length $\varepsilon$. We prove that, for given $\Lambda$: (1) if $\lambda_n = O(n^{\alpha})$ with $\alpha < 1$, then there exists $\varepsilon > 0$ such that $u|_{I_\varepsilon} = 0 \Rightarrow u = 0$; (2) if $\lambda_n = O(n)$ and $\Lambda$ is uniformly discrete, then there exists $\varepsilon > 0$ such that $u|_{I_\varepsilon} = 0 \Rightarrow u = 0$; (3) if $\alpha > 1$ and $\Lambda$ is uniformly discrete, then for all $\varepsilon > 0$, $u|_{I_\varepsilon} = 0 \Rightarrow u = 0$. Since distributions of this form are very common in engineering (e.g., in the modeling of ocean waves, in signal processing, and in vibrations of beams, plates, and shells), these uniqueness and nonuniqueness results have important consequences for identification problems in the applied sciences. We present an identification method and close this article with a simple example showing that the recovery of geometrical imperfections in a cylindrical shell is possible from a measurement of its dynamics.

Relevance: 80.00%

Publisher:

Abstract:

A method to compute three-dimensional (3D) left-ventricle (LV) motion, together with a color-coded visualization scheme for qualitative analysis in SPECT images, is proposed and used to investigate aspects of Cardiac Resynchronization Therapy (CRT). The method was applied to 3D gated-SPECT image sets from normal subjects and from patients with severe idiopathic heart failure, before and after CRT. Color-coded visualization maps representing regional LV motion showed a significant difference between patients and normal subjects. Numerical results of regional mean values representing the intensity and direction of movement in the radial direction are presented; a difference of one order of magnitude in movement intensity was observed between patients and normal subjects. These quantitative and qualitative parameters give good indications of the technique's potential application to the diagnosis and follow-up of patients undergoing CRT.

Relevance: 80.00%

Publisher:

Abstract:

A new, simple method to design linear-phase finite impulse response (FIR) digital filters, based on the steepest-descent optimization method, is presented in this paper. Starting from the specification of the desired frequency response and a maximum approximation error, a nearly optimum digital filter is obtained. Tests have shown that this method is an alternative to traditional ones such as Frequency Sampling and Parks-McClellan, especially when a desired frequency response other than a brick-wall response is required. (C) 2011 Elsevier Inc. All rights reserved.
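The approach can be sketched as gradient descent on the squared amplitude error of a Type-I (odd-length, symmetric) linear-phase filter over a frequency grid. This is an illustrative reconstruction under our own parameter choices, not the paper's exact algorithm:

```python
import numpy as np

def design_fir_sd(D, omega, M, eta=0.05, steps=3000):
    """Steepest-descent design of a Type-I linear-phase FIR filter.
    D: desired amplitude on the grid omega; returns 2M+1 symmetric taps."""
    C = np.cos(np.outer(omega, np.arange(M + 1)))   # amplitude A(w) = C @ c
    c = np.zeros(M + 1)
    for _ in range(steps):
        e = C @ c - D                               # amplitude error on the grid
        c -= eta * (C.T @ e) / len(omega)           # steepest-descent step
    return np.concatenate([c[:0:-1] / 2, [c[0]], c[1:] / 2])  # symmetric h[n]

omega = np.linspace(0.0, np.pi, 256)                # frequency grid
D = (omega <= 0.5 * np.pi).astype(float)            # ideal half-band lowpass
h = design_fir_sd(D, omega, M=20)                   # 41-tap linear-phase filter
```

Because the desired response enters only as samples on the grid, arbitrary (non-brick-wall) amplitude specifications are handled just as easily, which is the regime where the paper reports the method's advantage.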