926 results for "signal noise"
Abstract:
Herein we report the first applications of TCNQ as a rapid and highly sensitive off-the-shelf cyanide detector. As a proof of concept, we have exploited a kinetically selective single-electron transfer (SET) from cyanide to the deep-lying LUMO of TCNQ to generate a persistently stable radical anion (TCNQ•−) under ambient conditions. In contrast to known cyanide sensors that operate with limited signal outputs, TCNQ•− offers a unique multiple-signaling platform. The signal readout is facilitated through multichannel absorption in the UV-vis-NIR region and through scattering-based spectroscopic methods such as Raman spectroscopy and hyper-Rayleigh scattering. Particularly notable is the use of the intense 840 nm NIR absorption band to detect cyanide, which helps avoid the background interference in the UV-vis region that is predominant in biological samples. We also demonstrate the fabrication of a practical electronic device with TCNQ as the detector; the device generates a multi-order enhancement in current upon exposure to cyanide because of the formation of the conductive TCNQ•−.
Abstract:
This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the array signal vector. One non-parametric and three parametric (model-based) signal estimators are presented. It is shown that there is a strong correlation between the detector performance and the mean-square signal estimation error. Theoretical expressions for probability of false alarm and probability of detection are derived for all the detectors, and the theoretical predictions are compared with simulation results. It is shown that the detection performance of an AVS array with a certain number of sensors is equal to or slightly better than that of a conventional acoustic pressure sensor array with thrice as many sensors.
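A minimal sketch of one generic GLRT of this type (not any of the paper's four AVS detectors): the unknown array signal vector is replaced by a non-parametric estimate, the dominant eigenvector of the sample covariance, so the test statistic reduces to the largest eigenvalue normalized by an assumed-known noise power.

```python
import numpy as np

def glrt_statistic(snapshots, noise_power):
    """snapshots: (num_sensors, num_snapshots) complex array of array outputs."""
    num_snapshots = snapshots.shape[1]
    # Sample covariance of the array data.
    R = snapshots @ snapshots.conj().T / num_snapshots
    # The largest eigenvalue is the energy captured along the estimated
    # (dominant-eigenvector) signal direction, normalized by the noise power.
    lam_max = np.linalg.eigvalsh(R)[-1]
    return lam_max / noise_power

# Example decision rule: declare a source present (H1) if the statistic
# exceeds a threshold chosen for the desired probability of false alarm.
```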
Abstract:
While the recent discovery of a Higgs-like boson at the LHC is an extremely important and encouraging step towards the discovery of the complete Standard Model (SM), the current information on this state does not rule out the possibility of beyond-the-Standard-Model (BSM) physics. In fact, the current data can still accommodate reasonably large values of the branching fraction of the Higgs into a channel with 'invisible' decay products, a channel that is also well motivated theoretically. In this study we revisit the possibility of detecting the Higgs in this invisible channel at both LHC energies, 8 and 14 TeV, for two production modes: vector boson fusion (VBF) and associated production (ZH). We perform a comprehensive collider analysis for all of the above channels and project the reach of the LHC to constrain the invisible decay branching fraction at both 8 and 14 TeV. For the ZH case we consider decays of the Z boson into a pair of leptons as well as a b b̄ pair. For the VBF channel the sensitivity is found to be more than 5σ at both energies up to an invisible branching ratio (Br_inv) of ~0.80, with luminosities of ~20/30 fb⁻¹. The sensitivity extends to values of Br_inv ~ 0.25 for 300 fb⁻¹ at 14 TeV. However, the reach is more modest for the ZH mode with a leptonic final state: about 3.5σ for the planned luminosity at 8 TeV, reaching 8σ only at 14 TeV with 50 fb⁻¹. In spite of the much larger branching ratio (BR) of the Z into a b b̄ pair compared to the dilepton case, the former channel can provide useful reach, up to Br_inv ≳ 0.75, only for the higher-luminosity (300 fb⁻¹) option, using both jet-substructure and jet-clustering methods.
Abstract:
In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input-multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The detection algorithm alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the mixed Gibbs sampling (MGS) strategy coupled with a multiple-restart (MR) strategy and an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well to large dimensions.
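A rough sketch of the mixed-sampling idea for a single detection run follows; the multiple-restart wrapper, restart criterion, and channel estimator are omitted, and the mixing probability q, iteration budget, and symbol alphabet are illustrative assumptions rather than the authors' tuned choices.

```python
import numpy as np

def mgs_detect(y, H, alphabet, sigma2, num_iters=200, q=0.05, rng=None):
    """One run of mixed Gibbs sampling for y = Hx + n, symbols drawn from `alphabet`."""
    rng = rng or np.random.default_rng()
    n = H.shape[1]
    x = rng.choice(alphabet, size=n)                    # random initial symbol vector
    best_x = x.copy()
    best_cost = np.linalg.norm(y - H @ x) ** 2
    for _ in range(num_iters):
        for i in range(n):                              # coordinate-wise update
            if rng.random() < q:                        # random (uniform) update
                x[i] = rng.choice(alphabet)
            else:                                       # conventional Gibbs update
                costs = []
                for a in alphabet:
                    x_try = x.copy()
                    x_try[i] = a
                    costs.append(np.linalg.norm(y - H @ x_try) ** 2)
                costs = np.asarray(costs)
                p = np.exp(-(costs - costs.min()) / sigma2)
                x[i] = rng.choice(alphabet, p=p / p.sum())
            cost = np.linalg.norm(y - H @ x) ** 2
            if cost < best_cost:                        # keep the best vector visited
                best_x, best_cost = x.copy(), cost
    return best_x
```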
Abstract:
Photoacoustic/thermoacoustic imaging is an emerging hybrid imaging modality combining optical/microwave imaging with ultrasound imaging. The photoacoustic/thermoacoustic signals generated are affected by the nature of the excitation pulse waveform, the pulse width, the target object size, the transducer size, etc. In this study, k-Wave was used to simulate various configurations of excitation pulse, transducer type, and target object size, and to study their effect on the photoacoustic/thermoacoustic signals. A numerical blood vessel phantom was also used to examine the effect of the excitation pulse waveform and pulse width on the reconstructed images. This study will help in optimizing transducer design and reconstruction methods to obtain superior reconstructed images.
Abstract:
The amplitude-modulation (AM) and phase-modulation (PM) of an amplitude-modulated frequency-modulated (AM-FM) signal are defined as the modulus and phase angle, respectively, of the analytic signal (AS). The FM is defined as the derivative of the PM. However, this standard definition results in a PM with jump discontinuities in cases when the AM index exceeds unity, resulting in an FM that contains impulses. We propose a new approach to define smooth AM, PM, and FM for the AS, where the PM is computed as the solution to an optimization problem based on a vector interpretation of the AS. Our approach is directly linked to the fractional Hilbert transform (FrHT) and leads to an eigenvalue problem. The resulting PM and AM are shown to be smooth, and in particular, the AM turns out to be bipolar. We show an equivalence of the eigenvalue formulation to the square of the AS, and arrive at a simple method to compute the smooth PM. Some examples on synthesized and real signals are provided to validate the theoretical calculations.
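A minimal sketch of the standard definitions recalled above (AM as the modulus of the analytic signal, PM as its unwrapped angle, FM as the phase derivative); it does not implement the paper's smooth eigenvalue-based formulation.

```python
import numpy as np
from scipy.signal import hilbert

def standard_am_pm_fm(x, fs):
    analytic = hilbert(x)                   # analytic signal via the Hilbert transform
    am = np.abs(analytic)                   # amplitude envelope (AM)
    pm = np.unwrap(np.angle(analytic))      # unwrapped instantaneous phase (PM)
    fm = np.diff(pm) * fs / (2 * np.pi)     # instantaneous frequency in Hz (FM)
    return am, pm, fm

# For an AM-FM tone whose AM index exceeds unity, these standard definitions
# exhibit the phase jumps (and impulsive FM) that motivate the smooth approach.
```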
Abstract:
A power-scalable receiver architecture is presented for low-data-rate Wireless Sensor Network (WSN) applications in 130 nm RF-CMOS technology. The power-scalable receiver is motivated by the ability to leverage lower run-time performance requirements to save power. The proposed receiver is able to switch power settings based on the available signal and interference levels while maintaining the requisite BER. The low-IF receiver consists of a Variable Noise and Linearity LNA, IQ Mixers, a VGA, a Variable Order Complex Bandpass Filter, and a Variable Gain and Bandwidth Amplifier (VGBWA) capable of driving a variable-sampling-rate ADC. The various blocks have independent power-scaling controls depending on their noise, gain, and interference rejection (IR) requirements. The receiver is designed for constant-envelope QPSK-type modulation with a 2.4 GHz RF input, 3 MHz IF, and 2 MHz bandwidth. The chip operates at a 1 V Vdd with current scalable from 4.5 mA to 1.3 mA and a chip area of 0.65 mm².
Abstract:
An analysis of modulation schemes for the physical-layer network-coded two-way relaying scenario is presented, which employs two phases: the multiple access (MA) phase and the broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states, referred to as singular fade states. The singular fade states fall into two classes: (i) those caused by channel outage, whose harmful effect cannot be mitigated by adaptive network coding, called the non-removable singular fade states, and (ii) those which occur due to the choice of the signal set and whose harmful effects can be removed, called the removable singular fade states. In this paper, we derive an upper bound on the average end-to-end symbol error rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high signal-to-noise ratio (SNR), the contribution to the end-to-end SER comes from the following error events, which fall as SNR⁻¹: the error events associated with the removable and non-removable singular fade states and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall as SNR⁻², thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates over the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario. Furthermore, it is shown that for large Rician factors, among the removable singular fade states of the same magnitude, only those with the least absolute value of the phase angle contribute dominantly to the end-to-end SER, and it is sufficient to remove the effect of only such singular fade states.
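A minimal sketch, following the usual characterization of singular fade states in the physical-layer network coding literature: with the relay observation modeled as x_A + γx_B (γ the ratio of the two fade coefficients), the effective constellation loses its minimum distance whenever γ = -(x_A - x_A')/(x_B - x_B') for distinct symbol pairs; γ = 0 and |γ| → ∞ correspond to the non-removable (outage) states and are excluded below.

```python
import numpy as np
from itertools import product

def removable_singular_fade_states(signal_set, tol=1e-9):
    """Enumerate finite, nonzero fade ratios that collapse the effective constellation."""
    states = set()
    for xa, xa2, xb, xb2 in product(signal_set, repeat=4):
        if abs(xb - xb2) < tol:
            continue                                  # would need |gamma| -> infinity (outage-like)
        gamma = -(xa - xa2) / (xb - xb2)
        if abs(gamma) > tol:                          # drop the non-removable gamma = 0 state
            states.add(complex(round(gamma.real, 6), round(gamma.imag, 6)))
    return sorted(states, key=abs)

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK
print(len(removable_singular_fade_states(qpsk)))             # prints 12 for QPSK
```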
Abstract:
Epoch is defined as the instant of significant excitation within a pitch period of voiced speech. Epoch extraction continues to attract the interest of researchers because of its significance in speech analysis. Existing high-performance epoch extraction algorithms require either dynamic programming techniques or a priori information about the average pitch period. An algorithm without such requirements is proposed, based on the integrated linear prediction residual (ILPR), which resembles the voice source signal. The half-wave rectified and negated ILPR (or the Hilbert transform of the ILPR) is used as the pre-processed signal. A new non-linear temporal measure named the plosion index (PI) is proposed for detecting 'transients' in the speech signal. An extension of the PI, called the dynamic plosion index (DPI), is applied to the pre-processed signal to estimate the epochs. The proposed DPI algorithm is validated using six large databases which provide simultaneous EGG recordings. Creaky and singing voice samples are also analyzed. The algorithm has been tested for robustness in the presence of additive white and babble noise and on simulated telephone-quality speech. The performance of the DPI algorithm is found to be comparable to or better than five state-of-the-art techniques for the experiments considered.
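A rough sketch of the flavor of pre-processing described above: an LP residual is obtained by inverse filtering, then negated and half-wave rectified. The exact ILPR construction and the plosion/dynamic plosion indices are defined in the paper and are not reproduced here; librosa.lpc is used purely for convenience.

```python
import numpy as np
import librosa
from scipy.signal import lfilter

def preprocessed_residual(speech, sr, lpc_order=None):
    """Half-wave rectified, negated LP residual of a speech segment."""
    lpc_order = lpc_order or int(sr / 1000) + 2          # common rule-of-thumb order
    a = librosa.lpc(speech.astype(float), order=lpc_order)
    residual = lfilter(a, [1.0], speech)                 # LP inverse filtering
    return np.maximum(-residual, 0.0)                    # negate, then half-wave rectify
```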
Abstract:
In this paper, we consider signal detection in nt × nr underdetermined MIMO (UD-MIMO) systems, where (i) nt > nr, with an overload factor α = nt/nr > 1, (ii) nt symbols are transmitted per channel use through spatial multiplexing, and (iii) nt and nr are large (in the range of tens). A low-complexity detection algorithm based on reactive tabu search is considered. A variable-threshold stopping criterion is proposed, which offers near-optimal performance in large UD-MIMO systems at low complexity. A lower bound on the maximum likelihood (ML) bit error performance of large UD-MIMO systems is also obtained for comparison. The proposed algorithm is shown to achieve BER performance within 0.6 dB of the ML lower bound at an uncoded BER of 10⁻² in a 16 × 8 V-BLAST UD-MIMO system with 4-QAM (32 bps/Hz). Similar near-ML performance results are shown for 32 × 16 and 32 × 24 V-BLAST UD-MIMO with 4-QAM/16-QAM as well. A performance and complexity comparison between the proposed algorithm and the λ-generalized sphere decoder (λ-GSD) algorithm for UD-MIMO shows that the proposed algorithm achieves almost the same performance as λ-GSD but at significantly lower complexity.
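A minimal sketch of a plain tabu search over single-symbol neighbourhoods for y = Hx + n; the paper's reactive tabu mechanism and variable-threshold stopping criterion are not reproduced, and the tabu-list length and iteration budget are illustrative assumptions.

```python
import numpy as np

def tabu_detect(y, H, alphabet, num_iters=300, tabu_len=20, rng=None):
    rng = rng or np.random.default_rng()
    n = H.shape[1]
    x = rng.choice(alphabet, size=n)                       # random initial vector
    best_x, best_cost = x.copy(), np.linalg.norm(y - H @ x) ** 2
    tabu = []                                              # recently applied (index, symbol) moves
    for _ in range(num_iters):
        best_move, best_move_cost = None, np.inf
        for i in range(n):                                 # single-coordinate neighbourhood
            for a in alphabet:
                if a == x[i]:
                    continue
                x_try = x.copy()
                x_try[i] = a
                c = np.linalg.norm(y - H @ x_try) ** 2
                # Tabu unless it beats the best cost seen so far (aspiration).
                if (i, a) in tabu and c >= best_cost:
                    continue
                if c < best_move_cost:
                    best_move, best_move_cost = (i, a), c
        if best_move is None:
            break
        i, a = best_move
        x[i] = a
        tabu.append((i, a))
        if len(tabu) > tabu_len:
            tabu.pop(0)                                    # fixed-length tabu list
        if best_move_cost < best_cost:
            best_x, best_cost = x.copy(), best_move_cost
    return best_x
```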
Abstract:
The structural dynamics of dendritic spines is one of the key correlative measures of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy face difficulties due to scattering, which results in a low signal-to-noise ratio and thus limits the imaging depth to a few tens of microns. Multiphoton microscopy (MpM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons with longer wavelengths minimize scattering and allow access to deeper brain regions at several hundred microns. In this article, we provide a basic understanding of the physical phenomena that give MpM an edge over conventional microscopy. Further, we highlight a few of the key studies in the field of learning and memory which would not have been possible without the advent of MpM.
Abstract:
We propose to employ bilateral filters to solve the problem of edge detection. The proposed methodology presents an efficient and noise-robust method for detecting edges. Classical bilateral filters smooth images without distorting edges. In this paper, we modify the bilateral filter to perform edge detection, which is the opposite of bilateral smoothing. The Gaussian domain kernel of the bilateral filter is replaced with an edge detection mask, and the Gaussian range kernel is replaced with an inverted Gaussian kernel. The modified range kernel serves to emphasize dissimilar regions. The resulting approach effectively adapts the detection mask according to the pixel intensity differences. The results of the proposed algorithm are compared with those of standard edge detection masks. Comparisons of the bilateral edge detector with the Canny edge detection algorithm, both after non-maximal suppression, are also provided. The results of our technique are observed to be better and more noise-robust than those offered by methods employing masks alone, and are also comparable to the results from the Canny edge detector, outperforming it in certain cases.
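A minimal sketch of the modification described above, with a Sobel-x mask standing in for the edge detection mask and an inverted Gaussian as the range kernel; the exact combination and normalization used in the paper may differ, and intensities in [0, 1] are assumed.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def bilateral_edge_response(img, sigma_r=0.1, mask=SOBEL_X):
    """Per-pixel response of an edge mask weighted by an inverted Gaussian range kernel."""
    img = img.astype(float)
    out = np.zeros_like(img)
    k = mask.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + 2 * k + 1, x:x + 2 * k + 1]
            diff = window - img[y, x]
            # Inverted Gaussian: large weight for *dissimilar* neighbours.
            range_kernel = 1.0 - np.exp(-diff ** 2 / (2 * sigma_r ** 2))
            out[y, x] = np.sum(mask * range_kernel * window)
    return np.abs(out)   # strong responses mark edge candidates (before non-maximal suppression)
```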
Abstract:
In contemporary wideband orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE) and WiMAX, the different subcarriers over which a codeword is transmitted may experience different signal-to-noise ratios (SNRs). Thus, adaptive modulation and coding (AMC) in these systems is driven by a vector of subcarrier SNRs experienced by the codeword, and is more involved. Exponential effective SNR mapping (EESM) simplifies the problem by mapping this vector into a single equivalent flat-fading SNR. Analysis of AMC using EESM is challenging owing to its non-linear nature and its dependence on the modulation and coding scheme. We first propose a novel statistical model for the EESM, which is based on the Beta distribution. It is motivated by the central limit approximation for random variables with finite support. It is simpler than, and as accurate as, the more involved ad hoc models proposed earlier. Using it, we develop novel expressions for the throughput of a point-to-point OFDM link with multi-antenna diversity that uses EESM for AMC. We then analyze a general multi-cell OFDM deployment with co-channel interference for various frequency-domain schedulers. Extensive results based on LTE and WiMAX are presented to verify the model and analysis, and to gain new insights.
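A minimal sketch of the standard EESM mapping referred to above, SNR_eff = -β ln((1/N) Σ_k exp(-SNR_k/β)), with β calibrated per modulation-and-coding scheme (the value used in the example is purely illustrative).

```python
import numpy as np

def eesm(snr_linear, beta):
    """Map a vector of per-subcarrier SNRs (linear scale) to one effective flat-fading SNR."""
    snr_linear = np.asarray(snr_linear, dtype=float)
    return -beta * np.log(np.mean(np.exp(-snr_linear / beta)))

# Example: subcarrier SNRs of 2, 4 and 8 (linear) with an assumed beta = 1.5.
print(eesm([2.0, 4.0, 8.0], beta=1.5))
```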
Abstract:
An important question in kernel regression is that of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observation model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine-structure preservation are simultaneously achieved. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression results in better-quality estimates than its spatially non-adaptive counterparts. The denoising results obtained are comparable to those obtained using other state-of-the-art techniques, and in some scenarios superior.
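A minimal sketch of the SURE principle for a linear smoother with a single global bandwidth (the paper adapts both order and bandwidth spatially; this only illustrates the risk-estimation idea): for f̂ = Sy under i.i.d. Gaussian noise of variance σ², SURE(h) = ||y - Sy||² + 2σ² tr(S) - Nσ² is an unbiased estimate of the MSE risk.

```python
import numpy as np

def nw_smoother_matrix(x, h):
    """Nadaraya-Watson smoother matrix for 1-D sample locations x and bandwidth h."""
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d ** 2)                       # Gaussian kernel weights
    return K / K.sum(axis=1, keepdims=True)         # row-normalize

def sure(y, S, sigma2):
    resid = y - S @ y
    return resid @ resid + 2 * sigma2 * np.trace(S) - len(y) * sigma2

def pick_bandwidth(x, y, sigma2, candidates):
    """Choose the bandwidth minimizing the estimated risk over a candidate grid."""
    return min(candidates, key=lambda h: sure(y, nw_smoother_matrix(x, h), sigma2))
```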
Abstract:
The Large Hadron Collider (LHC) has completed its run at 8 TeV, with the experiments ATLAS and CMS having collected about 25 fb⁻¹ of data each. The discovery of a light Higgs boson, coupled with the lack of evidence for supersymmetry at the LHC so far, has motivated studies of supersymmetry in the context of naturalness, with the principal focus being the third-generation squarks. In this work, we analyze the prospects of the flavor-violating decay mode t̃₁ → c χ₁⁰ at 8 and 13 TeV center-of-mass energy at the LHC. This channel is also relevant in the dark matter context for the stop-coannihilation scenario, where the relic density depends on the mass difference between the lighter stop quark (t̃₁) and the lightest neutralino (χ₁⁰). This channel is extremely challenging to probe, especially when the mass difference between the lighter stop quark and the lightest neutralino is small. Using certain kinematical properties of the signal events, we find that the level of backgrounds can be reduced substantially. We find that the prospects for this channel are limited by the low production cross section for top squarks and the limited luminosity at 8 TeV, but at the 13 TeV LHC with 100 fb⁻¹ of luminosity, it is possible to probe top squarks with masses up to ~450 GeV. We also discuss how the sensitivity could be significantly improved by tagging charm jets.