Abstract:
Inverse filters are conventionally used for resolving overlapping signals of identical waveshape. However, the inverse filtering approach is shown to be useful for resolving overlapping signals, identical or otherwise, of unknown waveshapes. Digital inverse filter design based on the autocorrelation formulation of linear prediction is known to perform optimum spectral flattening of the input signal for which the filter is designed. This property of the inverse filter is used to accomplish composite signal decomposition. The theory is presented assuming the constituent signals to be responses of all-pole filters; however, the approach may be used in more general situations.
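As a rough illustration of the spectral-flattening property, the sketch below (with illustrative signal and order choices, not the paper's own experiments) designs an autocorrelation-method linear-prediction inverse filter and applies it to a composite of two overlapping all-pole responses:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_inverse_filter(x, p=10):
    # autocorrelation method: solve the normal equations R a = r
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    a = solve_toeplitz((r[:p], r[:p]), r[1:p + 1])
    return np.concatenate(([1.0], -a))          # A(z) = 1 - sum_k a_k z^{-k}

# two overlapping excitations sharing one (unknown) all-pole waveshape
x = np.zeros(512)
x[[50, 80]] = 1.0
x = lfilter([1.0], [1.0, -1.2, 0.81], x)

# filtering with A(z) flattens the spectrum, shrinking each constituent
# toward an impulse-like residual at its onset, which resolves the overlap
residual = lfilter(lpc_inverse_filter(x, p=2), [1.0], x)
```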
Abstract:
A new algorithm based on the signal subspace approach is proposed for localizing a sound source in shallow water. In the first instance we assume an ideal channel with plane-parallel boundaries and known reflection properties. The sound source is assumed to emit a broadband stationary stochastic signal. The algorithm takes into account the spatial distribution of all images and the reflection characteristics of the sea bottom. It is shown that both the range and the depth of a source can be measured accurately with the help of a vertical array of sensors. For good results the number of sensors should exceed the number of significant images; however, localization is possible even with a smaller array, at the cost of higher side lobes. Next, we allow the channel to be stochastically perturbed, which results in random phase errors in the reflection coefficients. The most significant effect of the phase errors is to introduce into the spectral matrix an extra term which may be viewed as signal-generated coloured noise. It is shown through computer simulations that the signal peak height is considerably reduced as a consequence of the random phase errors.
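A minimal MUSIC-style sketch of the subspace idea follows; it keeps only the direct path and a single surface image (reflection coefficient -1) instead of the paper's full image series, and all geometry values are illustrative:

```python
import numpy as np

c, f = 1500.0, 200.0                      # sound speed (m/s), frequency (Hz)
k = 2 * np.pi * f / c
z_rx = np.arange(10, 101, 10.0)           # depths of a 10-element vertical array

def steering(r, z_src):
    d_dir = np.hypot(r, z_rx - z_src)     # direct path
    d_img = np.hypot(r, z_rx + z_src)     # surface-image path (coefficient -1)
    return np.exp(1j * k * d_dir) / d_dir - np.exp(1j * k * d_img) / d_img

a0 = steering(3000.0, 40.0)               # true range 3 km, depth 40 m
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(len(z_rx))   # spectral matrix

w, V = np.linalg.eigh(R)
En = V[:, :-1]                            # noise subspace (one source assumed)
ranges, depths = np.linspace(2000, 4000, 81), np.linspace(10, 90, 81)
P = np.array([[1.0 / np.linalg.norm(En.conj().T @ (steering(r, z)
               / np.linalg.norm(steering(r, z)))) ** 2
               for r in ranges] for z in depths])
i, j = np.unravel_index(np.argmax(P), P.shape)
print(ranges[j], depths[i])               # pseudospectrum peaks near (3000, 40)
```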
Abstract:
The fractal dimension (FD) is a popular measure for characterizing signals. It has been used as a complexity measure of signals in various fields, including speech and biomedical applications. However, the proper interpretation of such analyses has not been thoroughly addressed. In this paper, we study the effect of various signal properties on FD and interpret the results in terms of classical signal processing concepts such as amplitude, frequency, number of harmonics, noise power, and signal bandwidth. We use Higuchi's method for estimating FD. This study may help in gaining a better understanding of the FD complexity measure itself, and in interpreting changing structural complexity of signals in terms of FD. Our results indicate that FD is a useful measure for quantifying structural changes in signal properties.
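Higuchi's estimator is standard and can be sketched compactly; kmax and the test signal below are illustrative choices:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension: slope of log L(k) versus log(1/k)."""
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                       # one curve per start offset m
            n = (N - 1 - m) // k
            if n < 1:
                continue
            idx = m + np.arange(n + 1) * k
            Lk.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n * k * k))
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    slope, _ = np.polyfit(log_k, log_L, 1)
    return slope

x = np.sin(2 * np.pi * 5 * np.arange(1000) / 1000)
print(higuchi_fd(x))                             # a smooth sine gives FD near 1
```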
Abstract:
We address the problem of computing the level crossings of an analog signal from samples measured on a uniform grid. Such a problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and thereby exactly computing the level-crossing times from those samples. The computational complexity of the technique is low, comprising simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective of the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. We also discuss some concrete practical applications of the sampling technique.
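The paper's exact method builds on a comparator front end and finite-rate-of-innovation sampling; as a simplified first-order stand-in, the sketch below estimates crossing times from uniform samples by linear interpolation:

```python
import numpy as np

def level_crossings(x, t, level):
    """Approximate level-crossing times from uniform samples."""
    s = x - level
    idx = np.where(np.sign(s[:-1]) * np.sign(s[1:]) < 0)[0]   # sign changes
    frac = s[idx] / (s[idx] - s[idx + 1])       # interpolate within each interval
    return t[idx] + frac * (t[idx + 1] - t[idx])

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 3 * t)
print(level_crossings(x, t, level=0.5))         # six crossings of level 0.5
```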
Abstract:
It is possible to sample signals at a sub-Nyquist rate and still reconstruct them with reasonable accuracy, provided they exhibit local Fourier sparsity. The underdetermined systems of equations that arise from undersampling have been solved to yield sparse solutions using compressed sensing algorithms. In this paper, we propose a framework for real-time sampling of multiple analog channels with a single A/D converter, achieving a higher effective sampling rate. Signal reconstruction from noisy measurements is presented for two different synthetic signals. A scheme for implementing the algorithm in hardware is also suggested.
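A toy single-channel version of the recovery step might look like the following, where the sparsity is modelled with a DCT dictionary and recovery is done by orthogonal matching pursuit; the interleaving of several analog channels through one A/D converter is not modelled here:

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)
N, M, K = 256, 64, 3
c = np.zeros(N)
c[rng.choice(N, K, replace=False)] = rng.normal(size=K)   # sparse coefficients
Psi = idct(np.eye(N), axis=0, norm="ortho")               # synthesis dictionary
x = Psi @ c                                               # sparse-in-DCT signal
rows = np.sort(rng.choice(N, M, replace=False))           # sub-Nyquist sample grid
A, y = Psi[rows], x[rows]

def omp(A, y, K):
    """Orthogonal matching pursuit: grow the support greedily, refit by LS."""
    r, S = y.copy(), []
    for _ in range(K):
        S.append(int(np.argmax(np.abs(A.T @ r) / np.linalg.norm(A, axis=0))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef
    c_hat = np.zeros(A.shape[1])
    c_hat[S] = coef
    return c_hat

print(np.max(np.abs(omp(A, y, K) - c)))     # near zero when recovery succeeds
```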
Abstract:
We introduce a novel temporal feature of a signal, namely the extrema-based signal track length (ESTL), for the problem of speech segmentation. We show that the ESTL measure is sensitive to both the amplitude and the frequency of the signal. The short-time ESTL (ST_ESTL) shows a promising way to capture the significant segments of a speech signal, where the segments correspond to acoustic units of speech having distinct temporal waveforms. We compare ESTL-based segmentation with ML and STM methods and find that it is as good as spectral-feature-based segmentation, but with lower computational complexity.
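The abstract does not give the exact ESTL definition; one plausible reading, sketched below with illustrative frame sizes, accumulates the piecewise-linear path length between successive extrema within each short-time frame:

```python
import numpy as np

def st_estl(x, frame=400, hop=160):
    """Short-time, extrema-based track length (one reading of ST_ESTL)."""
    feats = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = np.asarray(x[start:start + frame], dtype=float)
        d = np.diff(seg)
        ext = np.where(d[:-1] * d[1:] < 0)[0] + 1        # local extrema indices
        idx = np.r_[0, ext, len(seg) - 1]                # piecewise-linear track
        feats.append(np.hypot(np.diff(idx), np.diff(seg[idx])).sum())
    return np.array(feats)

# larger amplitude or higher frequency both lengthen the track, matching the
# abstract's sensitivity claim
t = np.arange(8000) / 8000.0
print(st_estl(np.sin(2 * np.pi * 50 * t)).mean(),
      st_estl(2 * np.sin(2 * np.pi * 100 * t)).mean())
```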
Abstract:
In this paper, we develop a low-complexity message passing algorithm for joint support and signal recovery of approximately sparse signals. The problem of recovering strictly sparse signals from noisy measurements can be viewed as one of recovering approximately sparse signals from noiseless measurements, making the approach applicable to strictly sparse signal recovery from noisy measurements. The support recovery embedded in the approach makes it suitable for recovering signals with the same sparsity profile, as in the multiple measurement vector (MMV) problem. Simulation results show that the proposed algorithm, termed JSSR-MP (joint support and signal recovery via message passing), achieves performance comparable to that of the sparse Bayesian learning (M-SBL) algorithm in the literature, at an order of magnitude lower complexity.
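JSSR-MP itself is not reproduced here; as a shape-of-the-loop illustration from the same message-passing family, the sketch below runs a generic approximate message passing (AMP) iteration with soft thresholding on a synthetic sparse problem (threshold rule and sizes are illustrative):

```python
import numpy as np

def amp_recover(A, y, iters=30):
    """Generic AMP: soft thresholding plus an Onsager-corrected residual."""
    M, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        theta = np.sqrt(np.mean(z ** 2))                 # residual-scaled threshold
        r = x + A.T @ z
        x = np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)
        z = y - A @ x + z * (np.count_nonzero(x) / M)    # Onsager term
    return x

rng = np.random.default_rng(2)
M, N, K = 100, 250, 10
A = rng.normal(size=(M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, K, replace=False)] = rng.normal(size=K)
y = A @ x0 + 0.01 * rng.normal(size=M)
x_hat = amp_recover(A, y)                                # close to x0 on support
```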
Abstract:
This paper presents a method of designing a programmable signal processor based on a bit-parallel vector-matrix multiplier (linear transformer). The salient feature of this design is that the efficiency of the direct vector-matrix multiplier is improved, and the VLSI design is made much simpler, by trading off the more expensive arithmetic operation (multiplication) for the 'cheaper' manipulation (addition/subtraction) of the data.
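The multiplication-for-addition trade can be illustrated in software: the sketch below computes a vector-matrix product for non-negative integer inputs using only row additions and shifts, one bit plane at a time (the actual bit-parallel VLSI structure is not modelled):

```python
import numpy as np

def shift_add_vecmat(x, A, bits=8):
    """y = x @ A with no multiplications: sum rows per bit plane, then shift."""
    y = np.zeros(A.shape[1], dtype=np.int64)
    for b in range(bits):
        mask = ((x >> b) & 1).astype(bool)       # bit plane b of every element
        partial = A[mask].sum(axis=0)            # additions only
        y += partial << b                        # shift replaces the scaling by 2**b
    return y

x = np.array([3, 5, 200], dtype=np.int64)        # inputs must fit in `bits` bits
A = np.arange(9, dtype=np.int64).reshape(3, 3)
assert np.array_equal(shift_add_vecmat(x, A), x @ A)
```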
Abstract:
The design and operation of the minimum-cost classifier, where the total cost is the sum of the measurement cost and the classification cost, is computationally complex. Noting the difficulties associated with this approach, we propose designing decision trees directly from a set of labelled samples. The feature space is first partitioned to transform the problem into one of discrete features. The resulting problem is solved by a dynamic programming algorithm over an explicitly ordered state space of all outcomes of all feature subsets. The solution procedure is very general and is applicable to any minimum-cost pattern classification problem in which each feature has a finite number of outcomes. These techniques are applied to (i) voiced/unvoiced/silence classification of speech and (ii) spoken vowel recognition. The resulting decision trees are operationally very efficient and yield attractive classification accuracies.
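A compact sketch of the dynamic-programming idea, on toy data with made-up costs: each node chooses the cheaper of stopping (majority-label misclassification cost) or paying a feature's measurement cost and branching on its outcomes:

```python
from functools import lru_cache
from collections import Counter

# toy data: two binary features; feature 0 separates the classes perfectly
samples = [((0, 1), 'a'), ((0, 0), 'a'), ((1, 1), 'b'), ((1, 0), 'b')]
meas_cost = [0.3, 0.4]          # cost of measuring feature 0 / feature 1
miscls_cost = 1.0

@lru_cache(maxsize=None)
def best_cost(feat_left, idx):
    subset = [samples[i] for i in idx]
    majority = Counter(lbl for _, lbl in subset).most_common(1)[0][1]
    best = miscls_cost * (len(subset) - majority) / len(samples)  # classify now
    for f in feat_left:                                           # or measure f
        rest = tuple(g for g in feat_left if g != f)
        cost = meas_cost[f] * len(subset) / len(samples)
        for v in {s[0][f] for s in subset}:
            child = tuple(i for i in idx if samples[i][0][f] == v)
            cost += best_cost(rest, child)
        best = min(best, cost)
    return best

print(best_cost((0, 1), tuple(range(len(samples)))))   # 0.3: measure feature 0
```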
Abstract:
The coding gain in subband coding, a popular technique for achieving signal compression, depends on how the input signal spectrum is decomposed into subbands. The optimality of such decomposition is conventionally addressed by designing appropriate filter banks. Here, the optimal decomposition of the input spectrum is addressed by choosing the set of bands that, for a given number of bands, achieves maximum coding gain. A set of necessary conditions for such optimality is derived, and an algorithm to determine the optimal band edges is then proposed. These band edges, along with ideal filters, achieve the upper bound of coding gain for a given number of bands. It is shown that, with ideal filters as well as with realizable filters of some given effective length, such a decomposition system performs better than the conventional nonuniform binary tree-structured decomposition in some cases, for AR sources as well as images.
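For intuition, the sketch below sweeps the single band edge of a two-band ideal-filter decomposition of an AR(1)-like spectrum and evaluates a standard ideal-filter coding-gain expression, G = σ_x² / ∏_k (σ_k²/α_k)^{α_k} with α_k the fractional bandwidth of band k; both the expression and the brute-force sweep are illustrative stand-ins, not the paper's algorithm:

```python
import numpy as np

w = np.linspace(1e-3, np.pi, 4096)
S = 1.0 / np.abs(1 - 0.9 * np.exp(-1j * w)) ** 2     # AR(1) PSD, rho = 0.9
dw = w[1] - w[0]

def coding_gain(edge):
    g = (S * dw).sum()                               # total power (scale cancels)
    for band in ((w <= edge), (w > edge)):
        alpha = band.mean()                          # fractional bandwidth
        sigma2 = (S[band] * dw).sum()                # power captured by the band
        g /= (sigma2 / alpha) ** alpha
    return g

edges = np.linspace(0.1, np.pi - 0.1, 200)
best = edges[np.argmax([coding_gain(e) for e in edges])]
print(best)    # the optimal edge sits toward low frequencies, where S is large
```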
Abstract:
This paper presents the design and performance analysis of a detector based on suprathreshold stochastic resonance (SSR) for the detection of deterministic signals in heavy-tailed non-Gaussian noise. The detector consists of a matched filter preceded by an SSR system which acts as a preprocessor. The SSR system is composed of an array of 2-level quantizers with independent and identically distributed (i.i.d.) noise added to the input of each quantizer. The standard deviation σ of the quantizer noise is chosen to maximize the detection probability for a given false alarm probability. In the case of a weak signal, the optimum σ also minimizes the mean-square difference between the output of the quantizer array and the output of the nonlinear transformation of the locally optimum detector. The optimum σ depends only on the probability density functions (pdfs) of the input noise and the quantizer noise for weak signals, and also on the signal amplitude and the false alarm probability for non-weak signals. The improvement in detector performance stems primarily from quantization and, to a lesser extent, from the optimization of the quantizer noise. For most input noise pdfs, the performance of the SSR detector is very close to that of the optimum detector.
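A Monte-Carlo sketch of the front end described above: an array of binary quantizers, each with its own i.i.d. noise, whose averaged output feeds a matched filter. The array size, noise levels, and Laplacian input noise are illustrative, and the printed mean shift is only a crude proxy for detection performance:

```python
import numpy as np

rng = np.random.default_rng(3)
Nq, L = 63, 200                                   # quantizers in array, samples
s = 0.1 * np.sin(2 * np.pi * np.arange(L) / 20)   # weak deterministic signal

def ssr_matched_filter(x, sigma_q):
    eta = rng.normal(0.0, sigma_q, size=(Nq, len(x)))   # i.i.d. quantizer noise
    y = np.mean(np.sign(x + eta), axis=0)               # averaged 2-level outputs
    return np.dot(y, s)                                 # matched-filter statistic

# how the quantizer-noise level shifts the statistic between the hypotheses
for sigma_q in (0.1, 0.5, 1.0, 2.0):
    noise = lambda: rng.laplace(0.0, 1.0, L)            # heavy-tailed input noise
    t1 = np.mean([ssr_matched_filter(noise() + s, sigma_q) for _ in range(300)])
    t0 = np.mean([ssr_matched_filter(noise(), sigma_q) for _ in range(300)])
    print(sigma_q, t1 - t0)
```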
Abstract:
Gabor's analytic signal (AS) is a unique complex signal corresponding to a real signal, but in general it admits infinitely many combinations of amplitude and frequency modulations (AM and FM, respectively). The standard approach is to enforce a non-negativity constraint on the AM, but this results in discontinuities in the corresponding phase modulation (PM), and hence an FM with discontinuities, particularly when the underlying AM-FM signal is over-modulated. In this letter, we analyze the phase discontinuities and propose a technique to compute smooth AM and FM from the AS by relaxing the non-negativity constraint on the AM. The proposed technique is effective at handling over-modulated signals. We present simulation results to support the theoretical calculations.
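The standard AM-FM split that the letter starts from is easy to reproduce; the sketch below computes the analytic signal of an over-modulated tone and exhibits the FM spikes at the AM sign changes (the authors' smoothing technique itself is not reproduced):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(fs) / fs
am = 1 + 1.5 * np.cos(2 * np.pi * 3 * t)     # over-modulated: true AM crosses zero
x = am * np.cos(2 * np.pi * 100 * t)

z = hilbert(x)                               # Gabor's analytic signal
am_std = np.abs(z)                           # conventional non-negative AM
fm_std = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
# fm_std shows spikes of roughly fs/2 where the true AM changes sign: these are
# the phase discontinuities that motivate relaxing the non-negativity constraint
print(fm_std.min(), fm_std.max())
```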
Abstract:
The classical approach to A/D conversion has been uniform sampling, where perfect reconstruction of bandlimited signals is obtained by satisfying the Nyquist sampling theorem. We propose a non-uniform sampling scheme based on level-crossing (LC) time information. We show stable reconstruction of bandpass signals, with the correct scale factor and hence a unique reconstruction, from the non-uniform time information alone. For reconstruction from the level crossings, we use sparse-reconstruction-based optimization, constraining the bandpass signal to be sparse in its frequency content. While the literature resorts to overdetermined systems of equations, we use an underdetermined approach together with the sparse reconstruction formulation. We obtain a reconstruction SNR > 20 dB and perfect support recovery with probability close to 1 in the noiseless case, and with lower probability in the noisy case. Randomly picking LCs from different levels, over the same limited signal duration and for the same length of information, is seen to be advantageous for reconstruction.
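A sketch of the reconstruction idea: level-crossing times and their known level values form an underdetermined linear system in the signal's sparse frequency coefficients, solved here by plain iterative soft thresholding (ISTA) as a stand-in for the paper's solver; the sizes, levels, and bandpass dictionary are illustrative:

```python
import numpy as np

kbins, amps = np.array([9, 23]), np.array([1.0, 0.6])   # sparse bandpass content

def sig(t):
    return sum(a * np.cos(2 * np.pi * k * t) for a, k in zip(amps, kbins))

# collect crossings of two levels on a dense grid, interpolated in time
t_dense = np.linspace(0.0, 1.0, 4096, endpoint=False)
xd, t_lc, y = sig(t_dense), [], []
for lev in (0.5, -0.5):
    i = np.where(np.diff(np.sign(xd - lev)) != 0)[0]
    frac = (lev - xd[i]) / (xd[i + 1] - xd[i])
    t_lc.append(t_dense[i] + frac * (t_dense[1] - t_dense[0]))
    y.append(np.full(len(i), lev))
t_lc, y = np.concatenate(t_lc), np.concatenate(y)

ks = np.arange(5, 60)                                   # bandpass bins (no DC)
A = np.cos(2 * np.pi * t_lc[:, None] * ks)              # cosine dictionary

def ista(A, y, lam=1e-3, iters=3000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        g = c - step * (A.T @ (A @ c - y))
        c = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)
    return c

c_hat = ista(A, y)          # dominant entries should land at bins 9 and 23
print(ks[np.argsort(np.abs(c_hat))[-2:]])
```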
Abstract:
Context-aware computing is useful in providing individualized services, focusing mainly on acquiring the surrounding context of the user. By comparison, very little research has been done on integrating context from different environments, despite its usefulness in diverse applications such as healthcare, M-commerce, and tourist-guide applications. In particular, one of the most important requirements for providing personalized service in a highly dynamic and constantly changing user environment is a context model that aggregates context from different domains to infer the context of an entity at a more abstract level. Hence, the purpose of this paper is to propose a context model, based on cognitive aspects, that relates contextual information so as to better capture the observation of certain worlds of interest for a more sophisticated context-aware service. We develop a C-IOB (Context-Information, Observation, Belief) conceptual model that analyzes context data from the physical, system, application, and social domains to infer context at a more abstract level. The beliefs developed about an entity (person, place, or thing) are primitive in most theories of decision making, so applications can use these beliefs, in addition to transaction history, to provide intelligent services. We enhance the proposed context model by further classifying context information into three categories, namely well-defined, qualitative, and credible context information, to make the system more realistic for real-world implementation. The proposed model is deployed to assist an M-commerce application. Simulation results show that the service selection and service delivery of the system improve upon those of a traditional system.
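A hypothetical data-structure reading of the C-IOB pipeline, with class names and the toy fusion rule being our own illustration rather than the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContextInformation:
    domain: str        # 'physical' | 'system' | 'application' | 'social'
    category: str      # 'well-defined' | 'qualitative' | 'credible'
    name: str
    value: object

@dataclass
class Observation:
    entity: str                                     # person, place, or thing
    items: List[ContextInformation] = field(default_factory=list)

@dataclass
class Belief:
    entity: str
    statement: str
    confidence: float

def infer_belief(obs: Observation) -> Belief:
    # toy fusion rule: weight well-defined context highest (illustrative weights)
    w = {"well-defined": 1.0, "credible": 0.8, "qualitative": 0.6}
    conf = sum(w[i.category] for i in obs.items) / max(len(obs.items), 1)
    return Belief(obs.entity, f"inferred context of {obs.entity}", min(conf, 1.0))
```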