83 results for Signal-to-Noise Ratio (SNR)
Abstract:
We address a new problem of improving automatic speech recognition performance, given multiple utterances of patterns from the same class. We formulate the problem of jointly decoding K multiple patterns given a single Hidden Markov Model. It is shown that such a solution is possible by aligning the K patterns using the proposed Multi Pattern Dynamic Time Warping algorithm, followed by the Constrained Multi Pattern Viterbi Algorithm. The new formulation is tested in the context of speaker-independent isolated word recognition for both clean and noisy patterns. When 10 percent of the speech is affected by burst noise at -5 dB signal-to-noise ratio (local), it is shown that joint decoding using only two noisy patterns reduces the noisy speech recognition error rate by about 51 percent, compared to single pattern decoding using the Viterbi Algorithm. In contrast, a simple maximization of individual pattern likelihoods provides only about a 7 percent reduction in error rate.
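The alignment step above builds on classical dynamic time warping. As a rough illustration of the underlying recursion only — this is a minimal two-sequence sketch, not the paper's K-pattern MPDTW formulation, and the function name and scalar absolute-difference cost are assumptions for this sketch:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

For example, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0.0, since the repeated sample can be absorbed by the warping path at no cost.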
Abstract:
In many IEEE 802.11 WLAN deployments, wireless clients have a choice of access points (APs) to connect to. In current systems, clients associate with the access point with the strongest signal-to-noise ratio. However, such an association mechanism can lead to unequal load sharing, resulting in diminished system performance. In this paper, we first provide a numerical approach based on stochastic dynamic programming to find the optimal client-AP association algorithm for a small topology consisting of two access points. Using the value iteration algorithm, we determine the optimal association rule for the two-AP topology. Next, utilizing the insights obtained from the optimal association rule for the two-AP case, we propose a near-optimal heuristic that we call RAT. We test the efficacy of RAT by considering more realistic arrival patterns and a larger topology. Our results show that RAT performs very well in these scenarios as well. Moreover, RAT lends itself to a fairly simple implementation.
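The value iteration algorithm mentioned above is the standard dynamic-programming fixed-point iteration for Markov decision processes. A generic sketch follows; the dictionary encoding of transitions and rewards is a hypothetical illustration, not the paper's two-AP model:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Generic value iteration.

    P[s][a] -> list of (probability, next_state) pairs.
    R[s][a] -> expected immediate reward for taking a in s.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup for state s
            best = max(
                R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

As a sanity check, a single state with one self-looping action of reward 1 and gamma = 0.9 converges to the geometric-series value 1/(1 - 0.9) = 10.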
Abstract:
The use of energy harvesting (EH) nodes as cooperative relays is a promising and emerging solution in wireless systems such as wireless sensor networks. It harnesses the spatial diversity of a multi-relay network and addresses the vexing problem of a relay's batteries getting drained in forwarding information to the destination. We consider a cooperative system in which EH nodes volunteer to serve as amplify-and-forward relays whenever they have sufficient energy for transmission. For a general class of stationary and ergodic EH processes, we introduce the notion of energy constrained and energy unconstrained relays and analytically characterize the symbol error rate of the system. Further insight is gained by an asymptotic analysis that considers the cases where the signal-to-noise ratio or the number of relays is large. Our analysis quantifies how the energy usage at an EH relay and, consequently, its availability for relaying, depends not only on the relay's energy harvesting process, but also on its transmit power setting and the other relays in the system. The optimal static transmit power setting at the EH relays is also determined. Altogether, our results demonstrate how a system that uses EH relays differs in significant ways from one that uses conventional cooperative relays.
Abstract:
We present observations of low-frequency recombination lines of carbon toward Cas A near 34.5 MHz (n ≈ 575) using the Gauribidanur radio telescope and near 560 MHz (n ≈ 225) and 770 MHz (n ≈ 205) using the NRAO 140 foot (43 m) telescope in Green Bank. We also present high angular resolution (1') observations of the C270α line near 332 MHz using the Very Large Array in B-configuration. A high signal-to-noise ratio spectrum is obtained at 34.5 MHz, which clearly shows a Voigt profile with distinct Lorentzian wings, resulting from significant pressure and radiation broadening at such high quantum numbers. The emission lines detected near 332, 550, and 770 MHz, on the other hand, are narrow and essentially Doppler-broadened. The measured Lorentzian width at 34.5 MHz constrains the allowed combinations of radiation temperature, electron density, and electron temperature in the line-forming region. Radiation broadening at 34.5 MHz places a lower limit of 115 pc on the separation between Cas A and the line-forming clouds. Modeling the variation in the integrated line-to-continuum ratio with frequency indicates that the region is likely to be associated with the cold atomic hydrogen component of the interstellar medium, and the physical properties of this region are likely to be T_e = 75 K, n_e = 0.02 cm^-3, T_R100 = 3200 K, and n_H T_e = 10,000 cm^-3 K. Comparison of the distribution of the C270α recombination line emission across Cas A with that of ^12CO and H I also supports the above conclusion.
Abstract:
We report Doppler-only radar observations of Icarus at Goldstone at a transmitter frequency of 8510 MHz (3.5 cm wavelength) during 8-10 June 1996, the first radar detection of the object since 1968. Optimally filtered and folded spectra achieve a maximum opposite-circular (OC) polarization signal-to-noise ratio of about 10 and help to constrain Icarus' physical properties. We obtain an OC radar cross section of 0.05 km^2 (with a 35% uncertainty), which is less than the values estimated by Goldstein (1969) and by Pettengill et al. (1969), and a circular polarization (SC/OC) ratio of 0.5 ± 0.2. We analyze the echo power spectrum with a model incorporating the echo bandwidth B and a spectral shape parameter n, yielding a coupled constraint between B and n. We adopt 25 Hz as the lower bound on B, which gives a lower bound on the maximum pole-on breadth of about 0.6 km and upper bounds on the radar and optical albedos that are consistent with Icarus' tentative QS classification. The observed circular polarization ratio indicates a very rough near-surface at spatial scales of the order of the radar wavelength. (C) 1999 Elsevier Science Ltd. All rights reserved.
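The step from the 25 Hz bandwidth lower bound to the roughly 0.6 km pole-on breadth follows from the standard Doppler-broadening relation for a rotating rigid body, B = 4πD cos(δ)/(λP). A quick check, noting that the rotation period of about 2.273 h is a literature value assumed here and is not given in the abstract:

```python
import math

# Doppler bandwidth of a rigid rotator: B = 4*pi*D*cos(delta) / (lambda * P),
# so the pole-on (delta = 0) breadth is D = B * lambda * P / (4*pi).
wavelength = 0.035        # m, for the 8510 MHz transmitter frequency
period = 2.273 * 3600     # s, Icarus' ~2.27 h rotation period (assumed, not in the abstract)
B = 25.0                  # Hz, the adopted lower bound on the echo bandwidth
D = B * wavelength * period / (4 * math.pi)   # breadth in meters
print(round(D / 1000, 2))                     # ~0.57 km, consistent with the quoted ~0.6 km bound
```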
Abstract:
An elementary combinatorial Tanner graph construction for a family of near-regular low density parity check (LDPC) codes achieving high girth is presented. These codes are near-regular in the sense that the degree of a left/right vertex is allowed to differ by at most one from the average. The construction yields, in quadratic time complexity, an asymptotic code family with provable lower bounds on the rate and the girth for a given choice of block length and average degree. The construction gives flexibility in the choice of design parameters of the code, such as rate, girth, and average degree. Performance simulations of the iterative decoding algorithm on the AWGN channel for codes designed using this method demonstrate that they perform better than regular PEG codes and MacKay codes of similar length at all values of signal-to-noise ratio.
Abstract:
Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as peak signal-to-noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However, this metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been proven to have strong correlation with subjective quality. Here, we apply the key principles of the NTIA-VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA-VQM-motivated metrics to standard TMN8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN8/MSE-based implementation.
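PSNR, the convenience metric the abstract contrasts with perceptual models, is just a log-scaled inverse of mean squared error. A minimal sketch for 8-bit samples (the flat-sequence interface is an illustrative simplification):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length pixel sequences."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")          # identical signals
    return 10.0 * math.log10(peak ** 2 / mse)
```

A uniform error of one grey level on 8-bit pixels gives MSE = 1 and hence 20·log10(255), about 48.13 dB — numerically "high quality", which is exactly why PSNR can diverge from perceived quality.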
Abstract:
In this paper, we report on the concept and the design principle of ultrafast Raman loss spectroscopy (URLS) as a structure-elucidating tool. URLS is an analogue of stimulated Raman scattering (SRS), but more sensitive than SRS, with a better signal-to-noise ratio. It involves the interaction of two laser sources, namely a picosecond (ps) Raman pump pulse and a white-light (WL) continuum, with a sample, leading to the generation of loss signals on the higher-energy (blue) side of the Raman pump wavelength, unlike the gain signals observed on the lower-energy (red) side in SRS. These loss signals are at least 1.5 times more intense than the SRS signals. An experimental study providing insight into the origin of this extra intensity in URLS as compared to SRS is reported. Furthermore, the requirement that the signal be detected on the higher-energy side by design eliminates interference from fluorescence, which appears on the red side. Unlike CARS, URLS signals are not precluded by the non-resonant background, and, being a self-phase-matched process, URLS is experimentally easier. Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
We fabricated a reflectance-based sensor which relies on the diffraction pattern generated from a bio-microarray, where an underlying thin-film structure enhances the diffracted intensity from molecular layers. The zero-order diffraction represents the background signal and the higher orders represent the phase difference between the array elements and the background. By taking the differential ratio of the first- and zero-order diffraction signals, we get a quantitative measure of molecular binding while simultaneously rejecting common-mode fluctuations. We improved the signal-to-noise ratio by an order of magnitude with this ratiometric approach compared to conventional single-channel detection. In addition, we use a lithography-based approach for fabricating microarrays, which results in spot sizes as small as 5 micron diameter, unlike the 100 micron spots from inkjet printing, and is therefore capable of a high degree of multiplexing. We describe the real-time measurement of adsorption of charged polymers and bulk refractometry using this technique. The lack of moving parts for point scanning of the microarray, together with the differential ratiometric measurement using diffracted orders from the same probe beam, allows us to make real-time measurements in spite of noise arising from thermal or mechanical fluctuations in the fluid sample above the sensor surface. Further, the lack of moving parts leads to considerable simplification in the readout hardware, permitting the use of this technique in compact point-of-care sensors.
Abstract:
Metal-based piezoresistive sensing devices could find a much wider applicability if their sensitivity to mechanical strain could be substantially improved. Here, we report a simple method to enhance the strain sensitivity of metal films by over two orders of magnitude and demonstrate it on specially designed microcantilevers. By locally inhomogenizing thin gold films using controlled electromigration, we have achieved a logarithmic divergence in the strain sensitivity with progressive microstructural modification. The enhancement in strain sensitivity could be explained using non-universal tunneling-percolation transport. We find that the Johnson noise limited signal-to-noise ratio is an order of magnitude better than silicon piezoresistors. This method creates a robust platform for engineering low resistance, high gauge factor metallic piezoresistors that may have profound impact on micro and nanoscale self-sensing technology. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4761817]
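The Johnson-noise-limited figure of merit above rests on the standard thermal-noise formula for a resistor, V_rms = sqrt(4 k_B T R Δf): a low-resistance metallic piezoresistor has a lower noise floor than a high-resistance silicon one. A quick sketch of that textbook relation (the example values are illustrative, not from the paper):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(R, T=300.0, bandwidth=1.0):
    """RMS Johnson (thermal) noise voltage of a resistor R over the given bandwidth."""
    return math.sqrt(4 * k_B * T * R * bandwidth)

# A 1 kOhm resistor at room temperature: about 4 nV per root-hertz of bandwidth.
v = johnson_noise_vrms(1000.0)
```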
Abstract:
We propose an iterative data reconstruction technique specifically designed for multi-dimensional multi-color fluorescence imaging. A Markov random field is employed (for modeling the multi-color image field) in conjunction with the classical maximum likelihood method. It is noted that the ill-posed nature of the inverse problem associated with multi-color fluorescence imaging forces iterative data reconstruction. Reconstruction of three-dimensional (3D) two-color images (obtained from nanobeads and cultured cell samples) shows significant reduction in the background noise (improved signal-to-noise ratio) with an impressive overall improvement in the spatial resolution (approximately 250 nm) of the imaging system. The proposed data reconstruction technique may find immediate application in 3D in vivo and in vitro multi-color fluorescence imaging of biological specimens. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4769058]
Abstract:
Low density parity-check (LDPC) codes are a class of linear block codes that are decoded by running the belief propagation (BP) algorithm, or log-likelihood ratio belief propagation (LLR-BP), over the factor graph of the code. One of the disadvantages of LDPC codes is the onset of an error floor at high values of signal-to-noise ratio, caused by trapping sets. In this paper, we propose a two-stage decoder to deal with different types of trapping sets. Oscillating trapping sets are handled by the first stage of the decoder and elementary trapping sets by the second stage. Simulation results on the regular PEG (504,252,3,6) code and the irregular PEG (1024,518,15,8) code show that the proposed two-stage decoder performs significantly better than the standard decoder.
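To make the iterative-decoding setting concrete — this is not the paper's two-stage decoder, only a textbook hard-decision (Gallager bit-flipping) decoder on a toy parity-check matrix, sketched as a minimal stand-in for message passing on the Tanner graph:

```python
def bit_flip_decode(H, word, max_iters=20):
    """Hard-decision bit-flipping decoding against parity-check matrix H (lists of 0/1)."""
    w = list(word)
    n = len(w)
    for _ in range(max_iters):
        syndrome = [sum(H[r][c] * w[c] for c in range(n)) % 2 for r in range(len(H))]
        if not any(syndrome):
            return w                    # all parity checks satisfied
        # flip the bit participating in the largest number of unsatisfied checks
        votes = [sum(syndrome[r] for r in range(len(H)) if H[r][c]) for c in range(n)]
        w[votes.index(max(votes))] ^= 1
    return w

# Parity-check matrix of the (7,4) Hamming code, used here as a toy example.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

On this toy code any single bit error is corrected; trapping sets are precisely the error patterns on which such iterative decoders fail to converge on larger graphs.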
Abstract:
We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of multiple access interference at the relay. The harmful effect of deep channel fade conditions can be effectively mitigated by proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space Time Coding (DSTC) scheme, which effectively removes most of the deep fade channel conditions at the transmitting nodes itself, without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that the deep fades occur when the channel fade coefficient vector falls in one of a finite number of vector subspaces, which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained. Also, a criterion to maximize the coding gain of the DSTC is obtained. Explicit low-decoding-complexity DSTC designs which satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high signal-to-noise ratio, the DSTC scheme provides large gains when compared to the conventional exclusive-OR network code and performs better than the adaptive network coding scheme.
Abstract:
Due to the inherent feedback in a decision feedback equalizer (DFE), the minimum mean square error (MMSE) or Wiener solution is not known exactly. The main difficulty in such analysis is the propagation of decision errors, which occur because of the feedback. Thus, in the literature these errors are neglected while designing and/or analyzing DFEs; a closed-form expression is then obtained for the Wiener solution, which we refer to as the ideal DFE (IDFE). DFEs have also been designed using an iterative and computationally efficient alternative, the least mean square (LMS) algorithm. However, again due to the feedback involved, the analysis of an LMS-DFE has not been available so far. In this paper we theoretically analyze a DFE taking into account the decision errors and study its performance at steady state. We then study an LMS-DFE and show the proximity of the LMS-DFE attractors to the optimal DFE Wiener filter (obtained after considering the decision errors) at high signal-to-noise ratios (SNRs). Further, via simulations we demonstrate that, even at moderate SNRs, an LMS-DFE is close to the MSE-optimal DFE. Finally, we compare the LMS-DFE attractors with the IDFE via simulations and show that an LMS equalizer outperforms the IDFE. In fact, the performance improvement is very significant even at high SNRs (up to 33%), where an IDFE is believed to be closer to the optimal one. Towards the end, we briefly discuss the tracking properties of the LMS-DFE.
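The LMS update at the heart of such equalizers is the simple stochastic-gradient rule w ← w + μ·e·x. A minimal sketch of a purely linear (feedforward, no decision feedback) LMS equalizer — an illustration of the update rule only, not the paper's DFE:

```python
import random

def lms_equalizer(received, desired, taps=3, mu=0.05):
    """One-pass LMS training of a linear transversal equalizer; returns the tap weights."""
    w = [0.0] * taps
    for n in range(taps - 1, len(received)):
        x = received[n - taps + 1:n + 1][::-1]    # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x))  # equalizer output
        e = desired[n] - y                        # error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# Identity channel with a random BPSK-like training sequence:
# the weights should converge toward a single unit tap [1, 0, 0].
random.seed(0)
d = [random.choice((-1.0, 1.0)) for _ in range(2000)]
w = lms_equalizer(d, d)
```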
Abstract:
We study the phenomenon of electromagnetically induced transparency and absorption (EITA) using a control laser with a Laguerre-Gaussian (LG) profile instead of the usual Gaussian profile, and observe significant narrowing of the resonance widths. Aligning the probe beam to the central hole in the doughnut-shaped LG control beam simultaneously allows the strong control intensity required for a high signal-to-noise ratio and the low intensity in the probe region required to obtain narrow resonances. Experiments with an expanded Gaussian control and a second-order LG control show that transit time and orbital angular momentum do not play a significant role. This explanation is borne out by a density-matrix analysis with a radially varying control Rabi frequency. We observe these resonances using degenerate two-level transitions in the D2 line of 87Rb in a room-temperature vapor cell, and an EIA resonance with a width up to 20 times below the natural linewidth for the F = 2 -> F' = 3 transition. Thus the use of LG beams should prove advantageous in all applications of EITA and other kinds of pump-probe spectroscopy as well.