5 results for "Low signal-to-noise ratio regime"

at Bucknell University Digital Commons - Pennsylvania - USA


Relevance:

100.00%

Publisher:

Abstract:

The signal-to-noise ratio of a monoexponentially decaying signal exhibits a maximum at an evolution time of approximately 1.26 T2. It had previously been thought that there is no closed-form solution for this maximum. We report in this note that the maximum can be represented in a specific, analytical closed form in terms of the negative real branch of an inverse function known as the Lambert W function. The Lambert W function is finding increasing use in the solution of problems in a variety of areas in the physical sciences. (C) 2014 Wiley Periodicals, Inc.
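The closed-form result can be checked numerically. The sketch below assumes the common model SNR(t) ∝ (1 − e^(−t/T2))/√t for a monoexponential decay (the abstract does not state its model; this is an assumption), whose maximizer works out to t* = −W₋₁(−1/(2√e)) − 1/2 on the negative real branch of the Lambert W function:

```python
import numpy as np
from scipy.special import lambertw

# Assumed SNR model, with t measured in units of T2:
# snr(t) proportional to (1 - exp(-t)) / sqrt(t).
def snr(t):
    return (1.0 - np.exp(-t)) / np.sqrt(t)

# Closed form for the maximizer via the negative real branch (k = -1)
# of the Lambert W function: t* = -W_{-1}(-1 / (2*sqrt(e))) - 1/2.
t_star = float(np.real(-lambertw(-np.exp(-0.5) / 2.0, k=-1) - 0.5))

print(f"t* = {t_star:.4f} T2")  # approximately 1.2564 T2
# Sanity check: t* is a local maximum of the assumed SNR model.
assert snr(t_star) >= max(snr(t_star - 1e-4), snr(t_star + 1e-4))
```

Setting the derivative of the assumed model to zero gives e^(−t)(2t + 1) = 1, which the substitution u = 2t + 1 converts into the defining equation of the Lambert W function.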

Relevance:

100.00%

Publisher:

Abstract:

Recent optimizations of NMR spectroscopy have focused on hardware innovations, such as novel probes and higher field strengths. Only recently has the potential to enhance the sensitivity of NMR through data acquisition strategies been investigated. This thesis focuses on enhancing the signal-to-noise ratio (SNR) of NMR using non-uniform sampling (NUS). After first establishing the concept and exact theory of compounding sensitivity enhancements in multiple non-uniformly sampled indirect dimensions, a new result was derived: NUS enhances both SNR and resolution at any given signal evolution time. In contrast, uniform sampling alternately optimizes SNR (t < 1.26T2) or resolution (t ~ 3T2), each at the expense of the other. Experiments were designed and conducted on a plant natural product to explore this behavior of NUS, in which the SNR and resolution continue to improve as acquisition time increases. Absolute sensitivity improvements of 1.5 and 1.9 are possible in each indirect dimension for matched and 2x-biased exponentially decaying sampling densities, respectively, at an acquisition time of 3T2. Recommendations for breaking into the linear regime of maximum entropy (MaxEnt) reconstruction are proposed. Furthermore, examination of a novel sinusoidal sampling density yielded improved line shapes in MaxEnt reconstructions of NUS data and enhancement comparable to a matched exponential sampling density. The Absolute Sample Sensitivity derived and demonstrated here for NUS holds great promise for expanding the adoption of non-uniform sampling.
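An exponentially biased NUS schedule of the kind discussed above can be sketched as follows; the helper name, parameters, and seed are illustrative choices, not taken from the thesis:

```python
import numpy as np

def nus_schedule(n_total, n_keep, t2_points, bias=1.0, seed=0):
    """Draw n_keep of n_total indirect-dimension increments with a density
    proportional to exp(-bias * i / t2_points).  bias=1 "matches" the signal
    decay; bias=2 is the 2x-biased case mentioned above.  Illustrative only."""
    rng = np.random.default_rng(seed)
    i = np.arange(n_total)
    p = np.exp(-bias * i / t2_points)
    p /= p.sum()                      # normalize to a probability density
    return np.sort(rng.choice(i, size=n_keep, replace=False, p=p))

# Keep 64 of 256 increments, 2x-biased relative to a decay of ~64 points.
schedule = nus_schedule(n_total=256, n_keep=64, t2_points=64, bias=2.0)
```

Because the density decays with the increment index, early evolution times (where the signal is strongest) are sampled densely while late times are sampled sparsely, which is the mechanism behind the SNR gain.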

Relevance:

100.00%

Publisher:

Abstract:

Non-uniform sampling (NUS) has been established as a route to obtaining true sensitivity enhancements when recording indirect dimensions of decaying signals in the same total experimental time as traditional uniform incrementation of the indirect evolution period. Theory and experiments have shown that NUS can yield up to two-fold improvements in the intrinsic signal-to-noise ratio (SNR) of each dimension, while even conservative protocols can yield 20-40% improvements in the intrinsic SNR of NMR data. Applications of biological NMR that can benefit from these improvements are emerging, and in this work we develop some practical aspects of applying NUS nD-NMR to studies that approach the traditional detection limit of nD-NMR spectroscopy. Conditions for obtaining high NUS sensitivity enhancements are considered here in the context of enabling 1H,15N-HSQC experiments on natural-abundance protein samples and 1H,13C-HMBC experiments on a challenging natural product. Through systematic studies we arrive at more precise guidelines for trading sensitivity enhancements against line shape constraints, and report an alternative sampling density based on a quarter-wave sinusoidal distribution that returns the highest fidelity we have seen to date in line shapes obtained by maximum entropy processing of non-uniformly sampled data.
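One plausible reading of the quarter-wave sinusoidal density is a weight that follows the first quarter period of a sinusoid, heaviest at the first increment and falling to zero at the last. The sketch below encodes that guess; the exact functional form used in the paper may differ:

```python
import numpy as np

def quarter_wave_density(n):
    """Sampling weights following one quarter period of a sinusoid:
    maximal at the first increment, zero at the last.  An assumed
    form, normalized to sum to 1."""
    i = np.arange(n)
    p = np.cos(np.pi * i / (2 * (n - 1)))
    return p / p.sum()

weights = quarter_wave_density(128)
```

Compared with a matched exponential density, this weighting decays more gently at early times and reaches zero at the schedule's end, which is consistent with the improved line shapes reported for MaxEnt reconstructions.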

Relevance:

100.00%

Publisher:

Abstract:

We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. And lastly, due to the improvements of these modifications, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method was able to significantly outperform traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
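Cepstral smoothing of a spectral envelope can be sketched as low-pass liftering of the real cepstrum. The function below is a generic illustration of that operation, not the specific filter used in the paper, and its window length is an arbitrary choice:

```python
import numpy as np

def cepstral_smooth(mag_spectrum, n_keep=20):
    """Smooth a one-sided magnitude spectrum (e.g. from np.fft.rfft) by
    keeping only the lowest n_keep quefrency bins of its real cepstrum.
    Generic illustration of cepstral smoothing, not the paper's exact filter."""
    log_mag = np.log(np.maximum(mag_spectrum, 1e-12))  # avoid log(0)
    cep = np.fft.irfft(log_mag)            # real cepstrum
    lifter = np.zeros_like(cep)
    lifter[:n_keep] = 1.0                  # low quefrencies: smooth envelope
    lifter[-(n_keep - 1):] = 1.0           # mirror for a symmetric lifter
    return np.exp(np.fft.rfft(cep * lifter).real)

smooth = cepstral_smooth(np.ones(129))     # a flat spectrum stays flat
```

Discarding the high-quefrency bins removes rapid ripple from the log spectrum while preserving the broad envelope, which is why such smoothing tends to suppress musical-noise artifacts in enhancement gain functions.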

Relevance:

100.00%

Publisher:

Abstract:

Smoke spikes occurring during transient engine operation have detrimental health effects and increase fuel consumption by requiring more frequent regeneration of the diesel particulate filter. This paper proposes a decision tree approach to real-time detection of smoke spikes for control and on-board diagnostics purposes. A contemporary, electronically controlled heavy-duty diesel engine was used to investigate the deficiencies of smoke control based on the fuel-to-oxygen ratio limit. With the aid of transient and steady state data analysis and empirical as well as dimensional modeling, it was shown that the fuel-to-oxygen ratio was not estimated correctly during the turbocharger lag period. This inaccuracy was attributed to the large manifold pressure ratios and low exhaust gas recirculation flows recorded during the turbocharger lag period, which meant that engine control module correlations for the exhaust gas recirculation flow and the volumetric efficiency had to be extrapolated. The engine control module correlations were based on steady state data, and it was shown that, unless the turbocharger efficiency is artificially reduced, the large manifold pressure ratios observed during the turbocharger lag period cannot be achieved at steady state. Additionally, the cylinder-to-cylinder variations during this period were shown to be sufficiently significant to make the average fuel-to-oxygen ratio a poor predictor of the transient smoke emissions. The steady state data also showed higher smoke emissions with higher exhaust gas recirculation fractions at constant fuel-to-oxygen ratio levels. This suggests that, even if the fuel-to-oxygen ratios were to be estimated accurately for each cylinder, they would still be ineffective as smoke limiters.
A decision tree trained on snap throttle data and pruned with engineering knowledge was able to use the inaccurate engine control module estimates of the fuel-to-oxygen ratio, together with the engine control module estimate of the exhaust gas recirculation fraction, the engine speed, and the manifold pressure ratio, to predict 94% of all spikes occurring over the Federal Test Procedure cycle. The advantages of this non-parametric approach over other commonly used parametric empirical methods such as regression were described. An application of accurate smoke spike detection, in which the injection pressure is increased at points with high opacity to reduce the cumulative particulate matter emissions substantially with a minimal increase in the cumulative nitrogen oxide emissions, was illustrated with dimensional and empirical modeling.
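A pruned decision tree over these signals can be pictured as a few nested threshold tests. Every threshold below is a made-up placeholder for illustration, not a value fitted in the paper:

```python
def smoke_spike_predicted(fo_ratio_est, egr_frac_est, engine_speed_rpm,
                          manifold_pressure_ratio):
    """Toy pruned decision tree for smoke-spike prediction.  The feature
    set mirrors the paper's inputs; all thresholds are hypothetical."""
    if manifold_pressure_ratio > 1.8:        # turbocharger-lag regime
        if egr_frac_est < 0.05:              # EGR flow collapses during lag
            return fo_ratio_est > 0.45       # rich in-cylinder mixture
        return fo_ratio_est > 0.60 and engine_speed_rpm < 1600
    return False                             # steady state: no spike predicted

print(smoke_spike_predicted(0.5, 0.02, 1500, 2.0))  # True in this toy example
```

The appeal of the non-parametric form is visible here: each branch remains physically interpretable, so domain knowledge can prune branches that a purely statistical fit would keep.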