945 results for "Social signal processing"
Abstract:
This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to the presence of navigation data bits and to frequency, phase, and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
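As a concrete illustration of the conventional PDI baseline described in this abstract, the sketch below coherently sums short blocks and then accumulates their squared magnitudes, so that unknown data-bit signs cancel out. The block length and the assumption that each block fits within one data-bit interval are ours, not the paper's.

```python
import numpy as np

def pdi_statistic(samples, block_len, num_blocks):
    """Conventional non-coherent Post Detection Integration (PDI):
    coherently sum short blocks (each assumed shorter than a data-bit
    interval), then accumulate squared magnitudes so that unknown
    data-bit signs and slow phase drift do not cancel the signal."""
    stat = 0.0
    for k in range(num_blocks):
        block = samples[k * block_len:(k + 1) * block_len]
        coherent = np.sum(block)          # partial coherent integration
        stat += np.abs(coherent) ** 2     # non-coherent accumulation
    return stat
```

Note that flipping the sign of whole blocks (mimicking unknown data bits) leaves the statistic unchanged, which is exactly why PDI is robust to data-bit uncertainty.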
Abstract:
We address the problem of signal reconstruction from Fourier transform magnitude spectrum. The problem arises in many real-world scenarios where magnitude-only measurements are possible, but it is required to construct a complex-valued signal starting from those measurements. We present some new general results in this context and show that the previously known results on minimum-phase rational transfer functions, and recoverability of minimum-phase functions from magnitude spectrum, form special cases of the results reported in this paper. Some simulation results are also provided to demonstrate the practical feasibility of the reconstruction methodology.
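For the minimum-phase special case mentioned in the abstract, a signal can be recovered from its DFT magnitude alone via the classical real-cepstrum construction. The following is a minimal sketch of that textbook method, not the paper's more general results:

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Reconstruct the minimum-phase signal whose DFT magnitude is `mag`
    (a length-N, strictly positive array), via the real cepstrum."""
    n = len(mag)
    ceps = np.fft.ifft(np.log(mag)).real
    # Fold the cepstrum: keep quefrency 0 (and N/2 if N is even) once,
    # double the causal part, zero the anti-causal part.
    fold = np.zeros(n)
    fold[0] = ceps[0]
    if n % 2 == 0:
        fold[1:n // 2] = 2 * ceps[1:n // 2]
        fold[n // 2] = ceps[n // 2]
    else:
        fold[1:(n + 1) // 2] = 2 * ceps[1:(n + 1) // 2]
    spectrum = np.exp(np.fft.fft(fold))
    return np.fft.ifft(spectrum).real
```

By construction, the reconstructed signal's DFT magnitude matches the input magnitude exactly; the phase it acquires is the minimum-phase one.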
Abstract:
This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the array signal vector. One non-parametric and three parametric (model-based) signal estimators are presented. It is shown that there is a strong correlation between the detector performance and the mean-square signal estimation error. Theoretical expressions for probability of false alarm and probability of detection are derived for all the detectors, and the theoretical predictions are compared with simulation results. It is shown that the detection performance of an AVS array with a certain number of sensors is equal to or slightly better than that of a conventional acoustic pressure sensor array with thrice as many sensors.
Abstract:
The amplitude-modulation (AM) and phase-modulation (PM) of an amplitude-modulated frequency-modulated (AM-FM) signal are defined as the modulus and phase angle, respectively, of the analytic signal (AS). The FM is defined as the derivative of the PM. However, this standard definition results in a PM with jump discontinuities in cases when the AM index exceeds unity, resulting in an FM that contains impulses. We propose a new approach to define smooth AM, PM, and FM for the AS, where the PM is computed as the solution to an optimization problem based on a vector interpretation of the AS. Our approach is directly linked to the fractional Hilbert transform (FrHT) and leads to an eigenvalue problem. The resulting PM and AM are shown to be smooth, and in particular, the AM turns out to be bipolar. We show an equivalence of the eigenvalue formulation to the square of the AS, and arrive at a simple method to compute the smooth PM. Some examples on synthesized and real signals are provided to validate the theoretical calculations.
Abstract:
The analytic signal (AS) was proposed by Gabor as a complex signal corresponding to a given real signal. The AS has a one-sided spectrum and gives rise to meaningful spectral averages. The Hilbert transform (HT) is a key component in Gabor's AS construction. We generalize the construction methodology by employing the fractional Hilbert transform (FrHT), without going through the standard fractional Fourier transform (FrFT) route. We discuss some properties of the fractional Hilbert operator and show how decomposition of the operator in terms of the identity and the standard Hilbert operators enables the construction of a family of analytic signals. We show that these analytic signals also satisfy Bedrosian-type properties and that their time-frequency localization properties are unaltered. We also propose a generalized-phase AS (GPAS) using a generalized-phase Hilbert transform (GPHT). We show that the GPHT shares many properties of the FrHT, in particular, selective highlighting of singularities, and a connection with Lie groups. We also investigate the duality between analyticity and causality concepts to arrive at a representation of causal signals in terms of the FrHT and GPHT. On the application front, we develop a secure multi-key single-sideband (SSB) modulation scheme and analyze its performance in noise and sensitivity to security key perturbations.
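The decomposition of the fractional Hilbert operator in terms of the identity and standard Hilbert operators can be sketched as follows. We assume the common convention H_alpha = cos(alpha) I + sin(alpha) H, which may differ from the paper's exact definition:

```python
import numpy as np

def hilbert_transform(x):
    """FFT-based discrete Hilbert transform of a real signal
    (transfer function -j*sgn(f))."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[1:(n + 1) // 2] = 1.0      # positive frequencies
    h[n // 2 + 1:] = -1.0        # negative frequencies
    return np.fft.ifft(-1j * h * X).real

def frht(x, alpha):
    """Fractional Hilbert transform via the decomposition
    H_alpha = cos(alpha)*I + sin(alpha)*H (one common convention)."""
    return np.cos(alpha) * x + np.sin(alpha) * hilbert_transform(x)
```

With alpha = 0 this reduces to the identity and with alpha = pi/2 to the standard Hilbert transform, which is what makes a continuous family of analytic-signal constructions possible.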
Abstract:
Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any sparse recovery algorithm depends on many parameters, such as the dimension of the sparse signal, the level of sparsity, and the measurement noise power. It has been observed that satisfactory performance of sparse recovery algorithms requires a minimum number of measurements, and this minimum differs from algorithm to algorithm. In many applications, the number of measurements is unlikely to meet this requirement, so any scheme that improves performance with fewer measurements is of significant interest in CS. Empirically, the performance of sparse recovery algorithms also depends on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, the resulting performance degradation does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms, which work on different principles, we can improve sparse signal recovery. We present a theoretical analysis to derive sufficient conditions for performance improvement of the proposed schemes, and we demonstrate their advantage through numerical simulations on both synthetic and real signals.
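One simple way to fuse estimates from multiple sparse recovery algorithms is to take the union of their estimated supports and re-fit by least squares. This illustrative rule is ours, not necessarily the paper's exact scheme, but it already shows how fusion can succeed when each individual algorithm is only partially correct:

```python
import numpy as np

def fuse_supports(A, y, supports, k):
    """Fuse support estimates from several sparse-recovery algorithms:
    take the union of the estimated supports, solve least squares on
    the union, keep the k largest coefficients, and re-fit on that
    pruned support (a simple illustrative fusion rule)."""
    union = sorted(set().union(*supports))
    coef, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[union] = coef
    top = np.argsort(np.abs(x))[::-1][:k]
    out = np.zeros_like(x)
    coef2, *_ = np.linalg.lstsq(A[:, top], y, rcond=None)
    out[top] = coef2
    return out
```

In the test below, each of two hypothetical algorithms finds only one of the two true support indices, yet the fused estimate recovers the signal exactly.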
Abstract:
We address the problem of designing an optimal pointwise shrinkage estimator in the transform domain, based on the minimum probability of error (MPE) criterion. We assume an additive model for the noise corrupting the clean signal. The proposed formulation is general in the sense that it can handle various noise distributions. We consider Gaussian, Student's-t, and Laplacian noise distributions and compare the denoising performance of the resulting estimator with that of mean-squared error (MSE)-based estimators. The MSE optimization is carried out using an unbiased estimator of the MSE, namely Stein's Unbiased Risk Estimate (SURE). Experimental results show that the MPE estimator outperforms the SURE estimator in terms of the SNR of the denoised output, for low (0-10 dB) and medium (10-20 dB) values of the input SNR.
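For the soft-thresholding case, the SURE objective used by the MSE-based competitor has a simple closed form under i.i.d. Gaussian noise; the following sketch evaluates it on a grid of candidate thresholds (the MPE estimator itself is not reproduced here):

```python
import numpy as np

def sure_soft_threshold(x, sigma, thresholds):
    """Stein's Unbiased Risk Estimate of the MSE of soft thresholding
    applied to x = s + n, n ~ N(0, sigma^2):
    SURE(t) = N*sigma^2 - 2*sigma^2*#{|x_i| <= t} + sum min(|x_i|, t)^2.
    Returns the risk-minimizing threshold and the risk values."""
    n = len(x)
    risks = []
    for t in thresholds:
        risk = (n * sigma**2
                - 2 * sigma**2 * np.sum(np.abs(x) <= t)
                + np.sum(np.minimum(np.abs(x), t) ** 2))
        risks.append(risk)
    risks = np.array(risks)
    return thresholds[int(np.argmin(risks))], risks
```

At t = 0 (no thresholding) the estimate equals the data and SURE reduces to N*sigma^2, the noise energy, which gives a quick sanity check.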
Abstract:
It has been shown that iterative re-weighted strategies often improve the performance of many sparse reconstruction algorithms. However, these strategies are algorithm dependent and cannot easily be extended to an arbitrary sparse reconstruction algorithm. In this paper, we propose a general iterative framework and a novel algorithm which iteratively enhance the performance of any given sparse reconstruction algorithm. We theoretically analyze the proposed method using the restricted isometry property and derive sufficient conditions for convergence and performance improvement. We also evaluate the performance of the proposed method using numerical experiments with both synthetic and real-world data.
Abstract:
The goal of whisper activity detection (WAD) is to find the whispered speech segments in a given noisy recording of whispered speech. Since whispering lacks periodic glottal excitation, it resembles unvoiced speech. This noise-like nature of whispered speech makes WAD more challenging than a typical voice activity detection (VAD) problem. In this paper, we propose a feature for WAD based on the long-term variation of the logarithm of the short-time sub-band signal energy. We also propose an automatic sub-band selection algorithm to maximally discriminate noisy whisper from noise. Experiments with eight noise types in four different signal-to-noise ratio (SNR) conditions show that, for most of the noises, the proposed WAD scheme performs significantly better than existing VAD schemes and whisper detection schemes when used for WAD.
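One plausible reading of the proposed feature computes the sliding variance of the log sub-band energy over a long-term window; the frame length, hop, band edges, and window length below are our illustrative choices, not the paper's:

```python
import numpy as np

def ltv_log_subband_energy(x, fs, frame_len, hop, band, ltw):
    """Sketch of a WAD feature: short-time energy of x in the sub-band
    `band` (Hz), log-compressed, followed by its variance over a
    long-term window of `ltw` frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    log_e = np.empty(n_frames)
    for i in range(n_frames):
        frame = x[i * hop:i * hop + frame_len] * np.hanning(frame_len)
        spec = np.abs(np.fft.rfft(frame)) ** 2
        log_e[i] = np.log(np.sum(spec[mask]) + 1e-12)
    # long-term variation: sliding variance of the log energy
    return np.array([np.var(log_e[max(0, i - ltw + 1):i + 1])
                     for i in range(n_frames)])
```

The intuition is that stationary noise has a nearly constant log energy (small long-term variance), while speech-like segments modulate the sub-band energy and push the variance up.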
Abstract:
Event-triggered sampling (ETS) is a new approach towards efficient signal analysis. The goal of ETS need not be only signal reconstruction, but also direct estimation of desired information in the signal by skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, when compared to existing uniform Nyquist-rate sampling based signal processing. We examine samples drawn using ETS, with events defined as zero-crossings (ZC), level-crossings (LC), and extrema, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust, and also facilitate instantaneous amplitude (IA) and instantaneous frequency (IF) estimation in a time-varying sinusoid. The estimation is proposed solely using extrema samples and a local polynomial regression based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic signal, energy separation, and ZC based approaches (which rely on uniform Nyquist-rate data acquisition and processing). Further, extrema-based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size captured with extrema-based ETS and uniform sampling, the former gives much better IA and IF estimation.
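A minimal version of the extrema-based IA estimation idea: detect local extrema and fit a least-squares polynomial to their absolute values. The polynomial degree and the use of a single global fit (rather than the paper's local regression) are our simplifications:

```python
import numpy as np

def extrema_ia_estimate(t, x, deg=2):
    """Estimate the instantaneous amplitude (IA) of a time-varying
    sinusoid from its extrema samples alone: detect strict local
    extrema, then least-squares fit a polynomial to their absolute
    values and evaluate it on the full time grid."""
    # strict local maxima and minima of the sampled signal
    idx = np.where((x[1:-1] - x[:-2]) * (x[1:-1] - x[2:]) > 0)[0] + 1
    coeffs = np.polyfit(t[idx], np.abs(x[idx]), deg)
    return np.polyval(coeffs, t), t[idx], np.abs(x[idx])
```

For a slowly varying envelope, the signal value at each extremum is very close to the envelope itself, so the fitted polynomial tracks the IA closely while using far fewer samples than the Nyquist-rate grid.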
Abstract:
This paper studies a pilot-assisted physical layer data fusion technique known as Distributed Co-Phasing (DCP). In this two-phase scheme, the sensors first estimate the channel to the fusion center (FC) using pilots sent by the latter; and then they simultaneously transmit their common data by pre-rotating them by the estimated channel phase, thereby achieving physical layer data fusion. First, by analyzing the symmetric mutual information of the system, it is shown that the use of higher order constellations (HOC) can improve the throughput of DCP compared to the binary signaling considered heretofore. Using an HOC in the DCP setting requires the estimation of the composite DCP channel at the FC for data decoding. To this end, two blind algorithms are proposed: 1) power method, and 2) modified K-means algorithm. The latter algorithm is shown to be computationally efficient and converges significantly faster than the conventional K-means algorithm. Analytical expressions for the probability of error are derived, and it is found that even at moderate to low SNRs, the modified K-means algorithm achieves a probability of error comparable to that achievable with a perfect channel estimate at the FC, while requiring no pilot symbols to be transmitted from the sensor nodes. Also, the problem of signal corruption due to imperfect DCP is investigated, and constellation shaping to minimize the probability of signal corruption is proposed and analyzed. The analysis is validated, and the promising performance of DCP for energy-efficient physical layer data fusion is illustrated, using Monte Carlo simulations.
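Of the two blind algorithms mentioned, the power method is the simplest: it extracts the dominant eigenvector of a Hermitian (e.g. sample covariance) matrix by repeated multiplication. A minimal sketch, applied here to a synthetic matrix rather than actual DCP channel data:

```python
import numpy as np

def power_method(R, iters=200):
    """Estimate the dominant eigenvector of a Hermitian matrix R by
    power iteration: repeatedly multiply and renormalize a starting
    vector until it aligns with the leading eigenvector."""
    v = np.ones(R.shape[0], dtype=R.dtype)
    for _ in range(iters):
        v = R @ v
        v = v / np.linalg.norm(v)
    return v
```

The estimate is determined only up to a unit-magnitude scalar, so agreement is checked through the magnitude of the inner product with the true vector.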
Abstract:
Multiplicative noise affects a signal far more severely than additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals corrupted by multiplicative noise, we propose a denoising algorithm based on the minimization of an unbiased estimator (MURE) of the mean-square error (MSE). We derive an expression for an unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding) by considering the time-domain MURE. The parameters of the thresholding function are obtained by minimizing the unbiased estimator MURE. We show that the parameters that are optimal under MURE are very close to the optimal parameters under the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method.
Abstract:
Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. A partial response maximum likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial response equalization for TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum likelihood detection for the same choice of channel model parameters, achieving a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give about a 10% gain in areal density near 5 grains/bit.
The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
Abstract:
We propose a multiple initialization based spectral peak tracking (MISPT) technique for heart rate monitoring from the photoplethysmography (PPG) signal. MISPT is applied to the PPG signal after removing motion artifacts using an adaptive noise cancellation filter. MISPT yields several estimates of the heart rate trajectory from the spectrogram of the denoised PPG signal, which are finally combined using a novel measure called trajectory strength. Multiple initializations help in correcting erroneous heart rate trajectories, unlike typical SPT, which uses only a single initialization. Experiments on PPG data from 12 subjects recorded during intensive physical exercise show that MISPT-based heart rate monitoring indeed yields a better heart rate estimate than SPT with a single initialization. On the 12 datasets, MISPT achieves an average absolute error of 1.11 BPM, lower than the 1.28 BPM obtained by a state-of-the-art online heart rate monitoring algorithm.
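A minimal sketch of the multiple-initialization idea: run a simple peak tracker from several starting bins of the spectrogram and keep the trajectory with the greatest accumulated magnitude. The search radius and the summed-magnitude criterion are our simplified stand-ins for the paper's trajectory-strength measure:

```python
import numpy as np

def track_peak(spectrogram, init_bin, search=3):
    """Single-initialization spectral peak tracking: in each frame,
    pick the strongest bin within +/- `search` bins of the previous
    estimate (bins along rows, frames along columns)."""
    track = [init_bin]
    for frame in spectrogram.T:
        lo = max(0, track[-1] - search)
        hi = min(len(frame), track[-1] + search + 1)
        track.append(lo + int(np.argmax(frame[lo:hi])))
    return np.array(track[1:])

def mispt(spectrogram, inits, search=3):
    """Run the tracker from multiple initial bins and keep the
    trajectory with the largest summed magnitude, a simple stand-in
    for a trajectory-strength measure."""
    tracks = [track_peak(spectrogram, i, search) for i in inits]
    strengths = [spectrogram[t, np.arange(spectrogram.shape[1])].sum()
                 for t in tracks]
    return tracks[int(np.argmax(strengths))]
```

A tracker started far from the true spectral ridge wanders in the noise floor and accumulates little magnitude, so the multi-initialization selection recovers the correct trajectory even though some single initializations fail.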
Abstract:
The separation of independent sources from mixed observed data is a fundamental and challenging problem. In many practical situations, the observations may be modelled as linear mixtures of a number of source signals, i.e. a linear multi-input multi-output system. A typical example is speech recordings made in an acoustic environment in the presence of background noise and/or competing speakers. Other examples include EEG signals, passive sonar applications, and cross-talk in data communications. In this paper, we propose iterative algorithms to solve the n × n linear time-invariant system under two different constraints. Some existing solutions for 2 × 2 systems are reviewed and compared.