240 results for 090609 Signal Processing


Relevance:

90.00%

Publisher:

Abstract:

Event-triggered sampling (ETS) is a new approach towards efficient signal analysis. The goal of ETS need not be only signal reconstruction, but also direct estimation of desired information in the signal by skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, compared to existing uniform Nyquist-rate sampling based signal processing. We examine samples drawn using ETS, with zero-crossings (ZC), level-crossings (LC), and extrema as events, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust, and also facilitate instantaneous amplitude (IA) and instantaneous frequency (IF) estimation in a time-varying sinusoid. The estimation uses only extrema samples and a local polynomial regression based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic signal, energy separation, and ZC based approaches (which rely on uniform Nyquist-rate data acquisition and processing). Further, extrema based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size, extrema based ETS gives much better IA and IF estimation than uniform sampling. (C) 2015 Elsevier B.V. All rights reserved.
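A minimal sketch of the idea on synthetic data (illustrative only; the window length, polynomial degree, and signal parameters are assumptions, not the paper's settings): extrema of a time-varying sinusoid are detected, their absolute values serve as IA samples, half the reciprocal of the inter-extrema spacing serves as raw IF samples, and both are smoothed by local polynomial least-squares fits.

```python
import numpy as np

def extrema_indices(x):
    """Indices of local maxima and minima of a sampled signal."""
    d = np.diff(x)
    return np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1

def local_poly_fit(t, y, t_eval, half_width, deg=2):
    """Local least-squares polynomial regression evaluated at each point of t_eval."""
    out = np.empty_like(t_eval)
    for k, tc in enumerate(t_eval):
        m = np.abs(t - tc) <= half_width
        if not m.any():
            out[k] = np.nan
            continue
        d = int(min(deg, m.sum() - 1))
        c = np.polyfit(t[m] - tc, y[m], d)
        out[k] = c[-1]                       # value of the local fit at tc
    return out

# Synthetic time-varying sinusoid: a(t) * cos(phi(t)) plus additive noise.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
ia_true = 1.0 + 0.5 * np.sin(2 * np.pi * 0.7 * t)
if_true = 50.0 + 10.0 * np.sin(2 * np.pi * 0.5 * t)
phi = 2 * np.pi * np.cumsum(if_true) / fs
x = ia_true * np.cos(phi) + 0.02 * np.random.randn(t.size)

idx = extrema_indices(x)
te = t[idx]
ia_samples = np.abs(x[idx])                  # amplitude sampled at extrema
if_samples = 0.5 / np.diff(te)               # half a period between consecutive extrema
tf = 0.5 * (te[:-1] + te[1:])

ia_hat = local_poly_fit(te, ia_samples, t, half_width=0.05)
if_hat = local_poly_fit(tf, if_samples, t, half_width=0.05)
print("median IF error (Hz):", np.median(np.abs(if_hat - if_true)))
```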

Relevance:

90.00%

Publisher:

Abstract:

This paper studies a pilot-assisted physical layer data fusion technique known as Distributed Co-Phasing (DCP). In this two-phase scheme, the sensors first estimate the channel to the fusion center (FC) using pilots sent by the latter; they then simultaneously transmit their common data by pre-rotating it by the estimated channel phase, thereby achieving physical layer data fusion. First, by analyzing the symmetric mutual information of the system, it is shown that the use of higher order constellations (HOC) can improve the throughput of DCP compared to the binary signaling considered heretofore. Using an HOC in the DCP setting requires the estimation of the composite DCP channel at the FC for data decoding. To this end, two blind algorithms are proposed: 1) a power method, and 2) a modified K-means algorithm. The latter is shown to be computationally efficient and to converge significantly faster than the conventional K-means algorithm. Analytical expressions for the probability of error are derived, and it is found that even at moderate to low SNRs, the modified K-means algorithm achieves a probability of error comparable to that achievable with a perfect channel estimate at the FC, while requiring no pilot symbols to be transmitted from the sensor nodes. The problem of signal corruption due to imperfect DCP is also investigated, and constellation shaping to minimize the probability of signal corruption is proposed and analyzed. The analysis is validated, and the promising performance of DCP for energy-efficient physical layer data fusion is illustrated, using Monte Carlo simulations.
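The following is a rough simulation sketch of the DCP idea with blind clustering at the FC (the sensor count, phase-error model, plain Lloyd/K-means clustering, and initialization rule are assumptions; the paper's modified K-means and HOC analysis are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, snr_db = 8, 2000, 0                  # sensors, symbols, per-sensor SNR (assumed values)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

# Phase 1: each sensor estimates its channel phase from the FC pilot (imperfectly).
theta = rng.uniform(0, 2 * np.pi, K)
theta_hat = theta + 0.1 * rng.standard_normal(K)

# Phase 2: all sensors pre-rotate the common data and transmit simultaneously.
data = rng.integers(0, 4, N)
s = qpsk[data]
g = np.exp(1j * (theta - theta_hat)).sum()             # composite DCP channel gain
noise = np.sqrt(K / (2 * 10 ** (snr_db / 10))) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = g * s + noise

# Blind estimation of the composite channel at the FC: cluster the received
# samples with plain Lloyd/K-means (a stand-in for the paper's modified K-means).
centers = np.sqrt(np.mean(np.abs(y) ** 2)) * qpsk      # initialization (an assumption)
for _ in range(20):
    lab = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)
    centers = np.array([y[lab == m].mean() if np.any(lab == m) else centers[m] for m in range(4)])

# Map each cluster to the nearest QPSK phase (valid because the composite DCP
# phase is small), average the implied gains, and decode coherently.
assign = np.argmin(np.abs(np.angle(centers[:, None] * np.conj(qpsk[None, :]))), axis=1)
g_hat = np.mean(centers / qpsk[assign])
detected = np.argmin(np.abs(y[:, None] - g_hat * qpsk[None, :]), axis=1)
print("symbol error rate with blind channel estimate:", np.mean(detected != data))
```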

Relevance:

90.00%

Publisher:

Abstract:

Multiplicative noise degrades a signal far more severely than additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals corrupted by multiplicative noise, we propose a denoising algorithm based on minimization of an unbiased estimator of the mean-square error (MSE), referred to as MURE. We derive an expression for an unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding) by considering the time-domain MURE. The parameters of the thresholding function are obtained by minimizing MURE. We show that the MURE-optimal parameters are very close to the optimal parameters obtained with the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method.
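Since the abstract does not give the MURE expression, the sketch below only illustrates the surrounding pipeline: soft thresholding of wavelet coefficients of a signal with multiplicative noise, with the threshold selected on a grid by the oracle MSE that the paper uses as its benchmark (the Haar transform, noise level, and threshold grid are assumptions).

```python
import numpy as np

def haar_fwd(x, levels):
    """Orthonormal Haar DWT (length of x must be divisible by 2**levels)."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    return a, coeffs

def haar_inv(a, coeffs):
    for d in reversed(coeffs):
        up = np.empty(2 * a.size)
        up[0::2], up[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = up
    return a

def soft(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(1)
n = 1024
clean = np.sin(2 * np.pi * 4 * np.arange(n) / n) + (np.arange(n) > n // 2)
noisy = clean * (1 + 0.3 * rng.standard_normal(n))      # multiplicative noise model

a, coeffs = haar_fwd(noisy, levels=4)
best = (np.inf, None)
for t in np.linspace(0.0, 1.0, 51):                      # threshold grid
    den = haar_inv(a, [soft(d, t) for d in coeffs])
    mse = np.mean((den - clean) ** 2)                    # oracle MSE (MURE would replace this)
    best = min(best, (mse, t))
print("oracle-optimal threshold:", best[1], "MSE:", best[0])
```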

Relevance:

90.00%

Publisher:

Abstract:

Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. A partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization for TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, to achieve a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is observed to give a ~10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
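A simplified granular-media sketch (grain density, jitter level, and read-head response are assumed values, not the paper's model parameters): each grain takes the polarity of the bit cell containing its jittered centre, the medium is the Voronoi tessellation of the grain centres, and the departure of the readback from the ideal-grid case illustrates media/jitter noise.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
nb = 32                       # bits per dimension
res = 8                       # medium samples per bit cell
jitter = 0.25                 # grain-centre jitter, in bit-cell units (assumed value)

bits = rng.integers(0, 2, (nb, nb)) * 2 - 1             # +/-1 magnetization pattern
# One grain per bit cell on average, with jittered centres (units of bit cells).
gy, gx = np.meshgrid(np.arange(nb) + 0.5, np.arange(nb) + 0.5, indexing="ij")
centres = np.column_stack([gy.ravel(), gx.ravel()]) + jitter * rng.standard_normal((nb * nb, 2))
# Each grain is magnetized by the bit cell that contains its (jittered) centre.
cell = np.clip(np.floor(centres).astype(int), 0, nb - 1)
grain_mag = bits[cell[:, 0], cell[:, 1]]

# Voronoi tessellation of the medium: every medium sample belongs to its nearest grain.
yy, xx = np.meshgrid((np.arange(nb * res) + 0.5) / res, (np.arange(nb * res) + 0.5) / res, indexing="ij")
_, nearest = cKDTree(centres).query(np.column_stack([yy.ravel(), xx.ravel()]))
medium = grain_mag[nearest].reshape(nb * res, nb * res)

# Readback: smooth the magnetization map with a Gaussian read-head response.
readback = gaussian_filter(medium.astype(float), sigma=0.6 * res)
ideal = gaussian_filter(np.kron(bits, np.ones((res, res))).astype(float), sigma=0.6 * res)
print("relative media/jitter noise power:", np.mean((readback - ideal) ** 2) / np.mean(ideal ** 2))
```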

Relevance:

90.00%

Publisher:

Abstract:

We propose a multiple initialization based spectral peak tracking (MISPT) technique for heart rate monitoring from the photoplethysmography (PPG) signal. MISPT is applied to the PPG signal after removing the motion artifact using an adaptive noise cancellation filter. MISPT yields several estimates of the heart rate trajectory from the spectrogram of the denoised PPG signal, which are finally combined using a novel measure called trajectory strength. Multiple initializations help correct erroneous heart rate trajectories, unlike typical SPT, which uses only a single initialization. Experiments on PPG data from 12 subjects recorded during intensive physical exercise show that MISPT based heart rate monitoring indeed yields a better heart rate estimate than SPT with a single initialization. On the 12 datasets, MISPT results in an average absolute error of 1.11 BPM, which is lower than the 1.28 BPM obtained by a state-of-the-art online heart rate monitoring algorithm.
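A toy sketch of spectral peak tracking with multiple initializations (the sampling rate, heart-rate band, greedy tracker, and the simple sum-of-magnitudes trajectory strength are assumptions, not the paper's exact definitions):

```python
import numpy as np
from scipy.signal import spectrogram

def track(S, n_bins, start_bin, max_jump):
    """Greedy spectral-peak tracking from one initial frequency bin."""
    path, b = [], start_bin
    for frame in S.T:
        lo, hi = max(b - max_jump, 0), min(b + max_jump + 1, n_bins)
        b = lo + int(np.argmax(frame[lo:hi]))
        path.append(b)
    return np.array(path)

fs = 25.0                                                # PPG sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
hr_hz = 1.5 + 0.5 * np.sin(2 * np.pi * t / 60)           # true heart rate, 90 +/- 30 BPM
ppg = np.cos(2 * np.pi * np.cumsum(hr_hz) / fs) + 0.5 * np.random.randn(t.size)

freqs, times, S = spectrogram(ppg, fs=fs, nperseg=200, noverlap=150)
band = (freqs >= 0.5) & (freqs <= 3.0)                   # plausible heart-rate band
cands = np.where(band)[0][np.argsort(S[band, 0])[-3:]]   # 3 initializations from frame 0

paths = [track(S, len(freqs), b0, max_jump=2) for b0 in cands]
strength = [S[p, np.arange(S.shape[1])].sum() for p in paths]   # simple trajectory strength
best = paths[int(np.argmax(strength))]
print("estimated HR (BPM), first frames:", np.round(60 * freqs[best][:5], 1))
```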

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we first recast the generalized symmetric eigenvalue problem, where the underlying matrix pencil consists of symmetric positive definite matrices, into an unconstrained minimization problem by constructing an appropriate cost function. We then extend it to the case of multiple eigenvectors using an inflation technique. Based on this asymptotic formulation, we derive a quasi-Newton-based adaptive algorithm for estimating the required generalized eigenvectors in the data case. The resulting algorithm is modular and parallel, and it is globally convergent with probability one. We also analyze the effect of inexact inflation on the convergence of this algorithm, and that of inexact knowledge of one of the matrices (in the pencil) on the resulting eigenstructure. Simulation results demonstrate that the performance of this algorithm is almost identical to that of the rank-one updating algorithm of Karasalo. Further, the performance of the proposed algorithm has been found to remain stable even over 1 million updates without suffering from any error accumulation problems.
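To illustrate the reformulation idea only (this is not the paper's cost function, inflation scheme, or adaptive update), the principal generalized eigenvector of a pencil (A, B) can be obtained by quasi-Newton minimization of the negative generalized Rayleigh quotient, which is an unconstrained, scale-invariant cost:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n)); A = M @ M.T + np.eye(n)   # symmetric positive definite
M = rng.standard_normal((n, n)); B = M @ M.T + np.eye(n)

# Unconstrained cost: the negative generalized Rayleigh quotient is scale-invariant,
# so no unit-norm constraint is needed; minimize it with a quasi-Newton method (BFGS).
cost = lambda w: -(w @ A @ w) / (w @ B @ w)
res = minimize(cost, rng.standard_normal(n), method="BFGS")
w = res.x / np.linalg.norm(res.x)

# Check against the principal generalized eigenvalue of the pencil (A, B).
lam = np.linalg.eigvals(np.linalg.solve(B, A)).real
print("recovered eigenvalue:", -cost(w), " true largest:", lam.max())
```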

Relevance:

80.00%

Publisher:

Abstract:

With the availability of a huge amount of video data from various sources, efficient video retrieval tools are increasingly in demand. Since video is multi-modal data, the perception of "relevance" between the user-provided query video (in Query-By-Example video search) and the retrieved video clips is subjective in nature. We present an efficient video retrieval method that takes the user's feedback on the relevance of retrieved videos and iteratively reformulates the input query feature vectors (QFV) for improved video retrieval. The QFV reformulation is done by a simple but powerful feature weight optimization method based on the Simultaneous Perturbation Stochastic Approximation (SPSA) technique. A video retrieval system with video indexing, searching, and relevance feedback (RF) phases is built to demonstrate the performance of the proposed method. The query and database videos are indexed using conventional video features such as color, texture, etc.; however, we use comprehensive and novel methods of feature representation, and a spatio-temporal distance measure, to retrieve the top M videos that are similar to the query. In the feedback phase, the user's iterative feedback on the previously retrieved videos is used to reformulate the QFV weights (a measure of importance) so that they automatically reflect the user's preferences. We observe that a few iterations of such feedback are generally sufficient for retrieving the desired video clips. The novel application of SPSA based RF for user-oriented feature weight optimization distinguishes the proposed method from existing ones. The experimental results show that the proposed RF based video retrieval exhibits good performance.
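A minimal sketch of SPSA-based weight tuning (the toy retrieval objective, feature dimension, and gain sequences are assumptions; only the SPSA gradient estimate itself follows the standard recipe):

```python
import numpy as np

def spsa_update(w, loss, a=0.1, c=0.1, rng=np.random.default_rng(4)):
    """One SPSA step: the gradient is estimated from just two loss evaluations."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)        # Bernoulli +/-1 perturbation
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2 * c) / delta
    return w - a * g_hat

# Toy stand-in for the retrieval objective: weighted distances between the query
# feature vector and features of videos the user marked relevant/irrelevant.
rng = np.random.default_rng(4)
q = rng.standard_normal(16)                              # query feature vector (QFV)
relevant = q + 0.1 * rng.standard_normal((5, 16))        # user-approved retrievals
irrelevant = rng.standard_normal((5, 16))

def loss(w):
    wd = lambda X: np.mean(np.sum(np.abs(w) * (X - q) ** 2, axis=1))
    return wd(relevant) / (wd(irrelevant) + 1e-9)        # pull relevant close, push others away

w = np.ones(16)
print("loss before:", loss(w))
for k in range(1, 201):
    w = spsa_update(w, loss, a=0.5 / k, c=0.1 / k ** 0.2)
print("loss after SPSA weight tuning:", loss(w))
```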

Relevance:

80.00%

Publisher:

Abstract:

A defect-selective photothermal imaging system for the diagnostics of optical coatings is demonstrated. The instrument has been optimized for pump and probe parameters, detector performance, and the signal processing algorithm. The imager is capable of efficiently mapping purely optical or thermal defects in coatings of low damage threshold and low absorbance. Detailed mapping of minor inhomogeneities at low pump power has been achieved through the simultaneous action of a low-noise fiber-optic photothermal beam deflection sensor and a common-mode-rejection demodulation (CMRD) technique. The linearity and sensitivity of the sensor have been examined theoretically and experimentally, and the signal-to-noise ratio improvement factor is found to be about 110 compared to a conventional bicell photodiode. The scanner is designed so that mapping of static or shock-sensitive samples is possible. In the case of a sample with an absolute absorptance of 3.8 × 10⁻⁴, a change in absorptance of about 0.005 × 10⁻⁴ has been detected without ambiguity, ensuring a contrast parameter of 760. This is about a 1085% improvement over the conventional approach using a bicell photodiode, at the same pump power. The merits of the system have been demonstrated by mapping two intentionally created damage sites in an MgF₂ coating on fused silica at different excitation powers. Amplitude and phase maps were recorded for thermally thin and thick cases, and the results are compared to demonstrate a case which, in conventional imaging, would lead to a deceptive conclusion regarding the type and location of the damage. A residual damage profile created by long-term irradiation with high pump power density has also been depicted.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, expressions for the convolution multiplication properties of the DCT IV and DST IV are derived starting from equivalent DFT representations. Using these expressions, methods for implementing linear filtering through block convolution in the DCT IV and DST IV domains are proposed. The techniques developed for the DCT IV and DST IV are further extended to the MDCT and MDST, where the filter implementation is near exact for symmetric filters and approximate for non-symmetric filters. No additional overlapping is required for implementing symmetric filtering in the MDCT domain, and hence the proposed algorithm is computationally competitive with DFT based systems. Moreover, the inherent 50% overlap between adjacent frames used in the MDCT/MDST domain reduces the blocking artifacts due to block processing or quantization. The techniques are computationally efficient for symmetric filters and provide a new alternative to DFT based convolution.
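For reference, the DFT-based block convolution that the proposed DCT IV/DST IV methods offer an alternative to can be sketched as follows (overlap-add; the block size is an arbitrary choice):

```python
import numpy as np

def overlap_add_fft(x, h, block=256):
    """Linear filtering via block convolution in the DFT domain (overlap-add)."""
    n_fft = 1 << int(np.ceil(np.log2(block + len(h) - 1)))
    H = np.fft.rfft(h, n_fft)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        y_seg = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)[: len(seg) + len(h) - 1]
        y[start:start + len(y_seg)] += y_seg             # overlapping tails add up
    return y

rng = np.random.default_rng(5)
x = rng.standard_normal(2000)
h = np.hanning(31); h /= h.sum()                         # symmetric FIR filter
assert np.allclose(overlap_add_fft(x, h), np.convolve(x, h))
```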

Relevance:

80.00%

Publisher:

Abstract:

Eklundh's (1972) algorithm for transposing a large matrix stored on an external device such as a disc has been programmed and tested. A simple description of the computer implementation is given in this note.
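A minimal in-memory sketch of the exchange pattern (on disc, each pass would stream the matrix two rows at a time; the power-of-two size restriction is inherent to the algorithm):

```python
import numpy as np

def eklundh_transpose(A):
    """In-place square transpose touching only two rows at a time (Eklundh, 1972).

    A must be n x n with n a power of two; on disc, each pass would read and
    rewrite the matrix one pair of rows at a time."""
    n = A.shape[0]
    b = n // 2
    while b >= 1:
        for i in range(n):
            if (i // b) % 2 == 0:
                j = i + b                        # partner row for this pass
                for c in range(n):
                    if (c // b) % 2 == 0:
                        A[i, c + b], A[j, c] = A[j, c], A[i, c + b]
        b //= 2
    return A

A = np.arange(64.0).reshape(8, 8)
assert np.array_equal(eklundh_transpose(A.copy()), A.T)
```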

Relevance:

80.00%

Publisher:

Abstract:

We address the issue of complexity in vector quantization (VQ) of wide-band speech LSF (line spectrum frequency) parameters. The recently proposed switched split VQ (SSVQ) method provides better rate-distortion (R/D) performance than the traditional split VQ (SVQ) method while requiring lower computational complexity, but at the expense of much higher memory. We develop the two-stage SVQ (TsSVQ) method, which gains both the memory and computational advantages and still retains good R/D performance. The proposed TsSVQ method uses a full-dimensional quantizer in its first stage to exploit all the higher-dimensional coding advantages, and then uses an SVQ method for quantizing the residual vector in the second stage so as to reduce the complexity. We also develop a transform domain residual coding method in this two-stage architecture that further reduces the computational complexity. To design an effective residual codebook in the second stage, variance normalization of Voronoi regions is carried out, which leads to the design of two new methods, referred to as normalized two-stage SVQ (NTsSVQ) and normalized two-stage transform domain SVQ (NTsTrSVQ). These two new methods have complementary strengths and hence are combined in a switched VQ mode, which leads to further improvement in R/D performance while retaining the low complexity requirement. We evaluate the performance of the new methods for wide-band speech LSF parameter quantization and show their advantages over the established SVQ and SSVQ methods.
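A toy sketch of the two-stage idea (random data, codebook sizes, and a plain Lloyd trainer are assumptions; the transform-domain coding and variance normalization of the NTsSVQ/NTsTrSVQ variants are not reproduced): a full-dimensional first-stage codebook is followed by split VQ of the residual.

```python
import numpy as np

def kmeans(X, k, iters=25, rng=np.random.default_rng(6)):
    """Plain Lloyd's algorithm returning a codebook of k codevectors."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for m in range(k):
            if np.any(lab == m):
                C[m] = X[lab == m].mean(axis=0)
    return C

def nearest(C, x):
    return C[np.argmin(((C - x) ** 2).sum(-1))]

rng = np.random.default_rng(6)
dim, n = 10, 4000                                    # toy 10-dim "LSF-like" vectors
train = np.sort(rng.random((n, dim)), axis=1)        # ordered, as LSFs are

# Stage 1: a full-dimensional codebook exploits dependencies across the whole vector.
C1 = kmeans(train, 64)
res = train - np.array([nearest(C1, x) for x in train])

# Stage 2: split the residual into two halves, one small codebook per split.
C2a = kmeans(res[:, :5], 32)
C2b = kmeans(res[:, 5:], 32)

def quantize(x):
    y1 = nearest(C1, x)
    r = x - y1
    return y1 + np.concatenate([nearest(C2a, r[:5]), nearest(C2b, r[5:])])

test = np.sort(rng.random((500, dim)), axis=1)
errs = np.array([quantize(x) - x for x in test])
print("two-stage split-VQ mean squared distortion:", np.mean(np.sum(errs ** 2, axis=1)))
```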

Relevance:

80.00%

Publisher:

Abstract:

In this paper we study the genericity of simultaneous stabilizability, simultaneous strong stabilizability, and simultaneous pole assignability in linear multivariable systems. The main results of the paper had been previously established by Ghosh and Byrnes using state-space methods. In contrast, the proofs in the present paper are based on input-output arguments and are much simpler to follow, especially in the cases of simultaneous stabilizability and simultaneous strong stabilizability. Moreover, the input-output methods used here suggest computationally reliable algorithms for solving these two types of problems. In addition to the main results, we also prove some lemmas on generic greatest common divisors which are of independent interest.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we generalize the existing rate-one space-frequency (SF) and space-time-frequency (STF) code constructions. The objective of this exercise is to provide a systematic design of full-diversity STF codes with high coding gain. Under this generalization, STF codes are formulated as linear transformations of data. Conditions on these linear transforms are then derived so that the resulting STF codes achieve full diversity and high coding gain with a moderate decoding complexity. Many of these conditions involve channel parameters such as the delay profile (DP) and temporal correlation. When these quantities are not available at the transmitter, the design of codes that exploit full diversity on channels with arbitrary DP and temporal correlation is considered. A complete characterization of a class of such robust codes is provided and their bit error rate (BER) performance is evaluated. On the other hand, when the channel DP and temporal correlation are available at the transmitter, the linear transforms are optimized to maximize the coding gain of full-diversity STF codes. The BER performance of such optimized codes is shown to be better than that of existing codes.
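A small illustration of diversity obtained from a linear transform (the two-subcarrier setup, QPSK, rotation angle, and SNR are arbitrary choices, not the paper's construction): spreading the symbols across independently fading subcarriers by a rotation lowers the codeword error rate relative to unspread transmission.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
theta = np.pi / 8                                        # rotation angle (illustrative choice)
T = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])          # real orthogonal spreading transform
cands = np.array(list(product(qpsk, qpsk)))              # all 16 symbol pairs
precoded = cands @ T.T                                   # codewords spread over 2 subcarriers

def cer(use_spreading, snr_db=18, trials=20000):
    """Codeword error rate with ML detection over 2 Rayleigh-faded subcarriers."""
    errs = 0
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
    book = precoded if use_spreading else cands
    for _ in range(trials):
        k = rng.integers(16)
        h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
        y = h * book[k] + sigma * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
        khat = np.argmin(np.sum(np.abs(y - h * book) ** 2, axis=1))
        errs += khat != k
    return errs / trials

print("codeword error rate, no spreading:", cer(False))
print("codeword error rate, linear spreading:", cer(True))
```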

Relevance:

80.00%

Publisher:

Abstract:

The paper presents a new adaptive delta modulator, called the hybrid constant factor incremental delta modulator (HCFIDM), which uses instantaneous as well as syllabic adaptation of the step size. Three instantaneous algorithms have been used: two new instantaneous algorithms (CFIDM-3 and CFIDM-2) and Song's voice ADM (SVADM). The quantisers have been simulated on a digital computer and their performances studied. The figure of merit used is the SNR, with correlated RC-shaped Gaussian signals and real speech as the inputs. The results indicate that the hybrid technique is superior to the non-hybrid adaptive quantisers. Also, the two new instantaneous algorithms have improved SNR and faster response to step inputs compared to the earlier systems.
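A generic constant-factor adaptive delta modulator sketch (the multiplier values and the test signal are assumptions; the specific CFIDM-2/CFIDM-3 rules and the syllabic adaptation of HCFIDM are not reproduced):

```python
import numpy as np

def cfdm(x, step0=0.05, P=1.5, Q=1 / 1.5, step_min=1e-3, step_max=1.0):
    """Constant-factor adaptive delta modulator: the step size grows by P when
    consecutive output bits agree (slope overload) and shrinks by Q when they
    alternate (granular region)."""
    bits = np.zeros(len(x), dtype=int)
    xhat = np.zeros(len(x))
    est, step, prev_bit = 0.0, step0, 1
    for n, s in enumerate(x):
        bit = 1 if s >= est else -1
        step = np.clip(step * (P if bit == prev_bit else Q), step_min, step_max)
        est += bit * step
        bits[n], xhat[n], prev_bit = bit, est, bit
    return bits, xhat

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.2 * np.sin(2 * np.pi * 1100 * t)
bits, xhat = cfdm(x)
snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - xhat) ** 2))
print("reconstruction SNR (dB): %.1f" % snr)
```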

Relevance:

80.00%

Publisher:

Abstract:

A low-cost 12 T pulsed magnet system has been integrated with a closed-cycle helium refrigerator. The copper solenoid is directly immersed in liquid nitrogen for reduced electrical resistance and more efficient heat transfer. This ensures a minimal delay of a few minutes between pulses. The sample is mounted on the cold finger of the refrigerator and, along with the surrounding vacuum shroud, is inserted into the bore of the solenoid. When combined with software lock-in signal processing to reduce noise, quick but accurate measurements can be performed at temperatures from 4 K to 300 K in fields up to 12 T. Quantum Hall effect data in a p-channel SiGe/Si heterostructure have been used to calibrate the instrument against a commercial superconducting magnet. Its versatility as a routine characterization tool is demonstrated by measuring parallel conduction in Si/SiGe modulation-doped heterostructures.
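A minimal sketch of software lock-in detection (the reference frequency, record length, and noise level are assumed values): the signal is mixed with quadrature references at the reference frequency and averaged over an integer number of cycles.

```python
import numpy as np

def software_lockin(x, f_ref, t):
    """Digital lock-in: mix with quadrature references and average over full cycles."""
    i = np.mean(x * np.cos(2 * np.pi * f_ref * t))
    q = np.mean(x * np.sin(2 * np.pi * f_ref * t))
    return 2 * np.hypot(i, q), np.arctan2(-q, i)         # amplitude and phase at f_ref

fs, f_ref = 50000.0, 137.0                                # sampling and reference rates (assumed)
t = np.arange(0, 2.0, 1 / fs)                             # 2 s = an integer number of cycles
x = 1e-3 * np.cos(2 * np.pi * f_ref * t + 0.3) + 0.02 * np.random.randn(t.size)
amp, phase = software_lockin(x, f_ref, t)
print("recovered amplitude: %.2e, phase: %.2f rad" % (amp, phase))
```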