994 results for signal reconstruction


Relevance:

20.00%

Publisher:

Abstract:

While the recent discovery of a Higgs-like boson at the LHC is an extremely important and encouraging step towards the discovery of the complete Standard Model (SM), the current information on this state does not rule out the possibility of beyond-Standard-Model (BSM) physics. In fact, the current data can still accommodate reasonably large values of the branching fractions of the Higgs into a channel with `invisible' decay products, a channel that is also well motivated theoretically. In this study we revisit the possibility of detecting the Higgs in this invisible channel at both LHC energies, 8 and 14 TeV, for two production modes: vector boson fusion (VBF) and associated production (ZH). We perform a comprehensive collider analysis for all the above channels and project the reach of the LHC to constrain the invisible decay branching fraction at both energies. For the ZH case we consider decays of the Z boson into a pair of leptons as well as into a b b̄ pair. For the VBF channel the sensitivity is found to be more than 5 sigma at both energies up to an invisible branching ratio (Br_inv) of ~0.80, with luminosities of ~20-30 fb⁻¹. The sensitivity is further extended to Br_inv ~ 0.25 for 300 fb⁻¹ at 14 TeV. However, the reach is more modest for the ZH mode with a leptonic final state: about 3.5 sigma for the planned luminosity at 8 TeV, reaching 8 sigma only at 14 TeV with 50 fb⁻¹. In spite of the much larger branching ratio (BR) of the Z into a b b̄ pair compared to the dilepton case, the former channel can provide useful reach up to Br_inv ≳ 0.75 only for the higher-luminosity (300 fb⁻¹) option, using both jet-substructure and jet-clustering methods. (C) 2013 Elsevier B.V. All rights reserved.


In this paper, an input receiver with a hysteresis characteristic that can operate at voltage levels between 0.9 V and 5 V is proposed. The input receiver can also be used as a wide-voltage-range Schmitt trigger. At the same time, reliable circuit operation is ensured. To the best of our knowledge, this is the first wide-voltage-range Schmitt trigger to be reported. The proposed circuit is compared with previously reported input receivers and is shown to have better noise immunity. The proposed input receiver eliminates the need for a separate Schmitt trigger and input buffer. The frequency of operation is also higher than that of the previously reported receiver. The circuit is simulated using HSPICE in a 0.35-μm standard thin-oxide technology. Monte Carlo analysis conducted at different process conditions shows that the proposed circuit works well across process conditions at different operating voltage levels. A noise impulse of magnitude V_CC/2 is added to the input voltage to show that the receiver resolves the correct logic level even in the presence of noise. Here, V_CC is the fixed voltage supply of 3.3 V.
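The hysteresis behavior described above can be sketched at the behavioral level. This is not the paper's transistor-level receiver; the threshold voltages below are illustrative assumptions.

```python
# Behavioral sketch of Schmitt-trigger hysteresis: the output switches high
# only when the input rises above v_high, and low only when it falls below
# v_low; in between, the previous state is held. Thresholds are hypothetical.
def schmitt_trigger(samples, v_low=1.2, v_high=2.1, state=0):
    """Return the output logic level (0/1) for each input sample."""
    out = []
    for v in samples:
        if v >= v_high:
            state = 1
        elif v <= v_low:
            state = 0
        # between the thresholds the previous state is held (hysteresis)
        out.append(state)
    return out

# Values inside the 1.2-2.1 V window do not cause spurious toggling,
# which is the noise-immunity property the abstract emphasizes.
print(schmitt_trigger([0.5, 1.5, 2.5, 1.8, 1.0, 1.5]))
```

The hysteresis window is what lets the receiver tolerate a noise impulse of roughly half the threshold separation without a logic error.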


The amplitude-modulation (AM) and phase-modulation (PM) of an amplitude-modulated frequency-modulated (AM-FM) signal are defined as the modulus and phase angle, respectively, of the analytic signal (AS). The FM is defined as the derivative of the PM. However, this standard definition results in a PM with jump discontinuities in cases when the AM index exceeds unity, resulting in an FM that contains impulses. We propose a new approach to define smooth AM, PM, and FM for the AS, where the PM is computed as the solution to an optimization problem based on a vector interpretation of the AS. Our approach is directly linked to the fractional Hilbert transform (FrHT) and leads to an eigenvalue problem. The resulting PM and AM are shown to be smooth, and in particular, the AM turns out to be bipolar. We show an equivalence of the eigenvalue formulation to the square of the AS, and arrive at a simple method to compute the smooth PM. Some examples on synthesized and real signals are provided to validate the theoretical calculations.
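The standard definitions that the abstract starts from can be sketched numerically. The snippet below builds Gabor's analytic signal with an FFT (an even-length real signal is assumed) and derives AM, PM, and FM for a sub-unity AM index, where the standard definition is still well behaved; it does not implement the paper's eigenvalue-based smooth PM.

```python
import numpy as np

def analytic_signal(x):
    # One-sided-spectrum construction of the analytic signal via the FFT
    # (equivalent to x + j*Hilbert(x) for real x; even length assumed).
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[N // 2] = 1.0
    h[1:N // 2] = 2.0
    return np.fft.ifft(X * h)

# Standard definitions from the text: AM = |z|, PM = angle(z), FM = dPM/dt.
fs = 1000.0
t = np.arange(1000) / fs
x = (1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 100 * t)  # AM index 0.5
z = analytic_signal(x)
am = np.abs(z)
pm = np.unwrap(np.angle(z))
fm = np.gradient(pm) * fs / (2 * np.pi)   # instantaneous frequency in Hz

print(np.median(fm))                      # close to the 100 Hz carrier
```

With an AM index above unity the envelope would cross zero and `pm` would pick up the jump discontinuities the abstract describes, which is the case the proposed optimization-based PM addresses.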


The performance of postdetection integration (PDI) techniques for the detection of Global Navigation Satellite System (GNSS) signals in the presence of uncertainties in frequency offset, noise variance, and unknown data bits is studied. It is shown that conventional PDI techniques are generally not robust to uncertainty in the data bits and/or the noise variance. Two new modified PDI techniques are proposed and shown to be robust to these uncertainties. The receiver operating characteristic (ROC) and sample-complexity performance of the PDI techniques in the presence of model uncertainties are derived analytically. It is shown that the proposed methods significantly outperform existing methods, and hence they could become increasingly important as GNSS receivers push the envelope on the minimum signal-to-noise ratio (SNR) for reliable detection.
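The basic postdetection-integration idea can be illustrated with a toy model: correlate each coherent block against a replica, then sum squared magnitudes across blocks, so the unknown data-bit signs cancel. The block structure, amplitude, and parameters below are illustrative assumptions, not the paper's GNSS signal model.

```python
import numpy as np

# Noncoherent postdetection integration (PDI) sketch: coherent correlation
# per block, then sum of squares across blocks. Squaring discards the
# unknown per-block data-bit sign, which is why plain coherent summation
# across blocks fails under bit flips.
rng = np.random.default_rng(0)
L, K = 100, 20                     # samples per block, number of blocks
code = rng.choice([-1.0, 1.0], L)  # spreading-code replica (hypothetical)
bits = rng.choice([-1.0, 1.0], K)  # unknown data bits, one per block
amp = 0.5                          # signal amplitude, unit-variance noise

rx = np.concatenate([amp * b * code + rng.normal(0, 1, L) for b in bits])

blocks = rx.reshape(K, L)
coh = blocks @ code                # coherent correlation per block
coherent_sum = abs(coh.sum())      # degraded by random bit signs
pdi = np.sum(coh ** 2)             # noncoherent PDI statistic

# Under noise only, E[pdi] is about K*L; with signal present it is far larger.
print(pdi)
```

The detection decision would compare `pdi` to a threshold set from its noise-only distribution, which is where the noise-variance uncertainty studied in the paper enters.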


Four-dimensional fluorescence microscopy, which records 3D image information as a function of time, provides an unbiased way of tracking the dynamic behavior of subcellular components in living samples and capturing key events in complex macromolecular processes. Unfortunately, the combination of phototoxicity and photobleaching can severely limit the density or duration of sampling, thereby limiting the biological information that can be obtained. Although widefield microscopy provides a very light-efficient way of imaging, obtaining high-quality reconstructions requires deconvolution to remove optical aberrations. Unfortunately, most deconvolution methods perform very poorly at low signal-to-noise ratios, and therefore require moderate photon doses to obtain acceptable resolution. We present a unique deconvolution method that combines an entropy-based regularization function with kernels that exploit general spatial characteristics of the fluorescence image to push the required dose to extremely low levels, resulting in an enabling technology for high-resolution in vivo biological imaging.


In this paper, a nonlinear suboptimal detector whose performance in heavy-tailed noise is significantly better than that of the matched filter is proposed. The detector consists of a nonlinear wavelet denoising filter to enhance the signal-to-noise ratio, followed by a replica correlator. Performance of the detector is investigated through an asymptotic theoretical analysis as well as Monte Carlo simulations. The proposed detector offers the following advantages over the optimal (in the Neyman-Pearson sense) detector: it is easier to implement, and it is more robust with respect to error in modeling the probability distribution of noise.
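The detector structure described above (a nonlinear wavelet-denoising front end followed by a replica correlator) can be sketched as follows. A single-level Haar transform with soft thresholding is assumed as a stand-in for the paper's denoising filter, and the threshold value is an arbitrary choice.

```python
import numpy as np

def haar_denoise(x, thr):
    # Single-level orthonormal Haar transform, soft-threshold the detail
    # coefficients (the nonlinearity), then reconstruct. Even length assumed.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
n = 512
replica = np.sin(2 * np.pi * np.arange(n) / 32)   # known signal shape
noise = rng.standard_t(df=2.5, size=n)            # heavy-tailed noise
rx = replica + noise

stat_plain = rx @ replica                          # matched filter alone
stat_denoised = haar_denoise(rx, thr=1.0) @ replica  # denoise, then correlate
print(stat_denoised)
```

The soft threshold clips the large-amplitude detail coefficients produced by impulsive noise samples while leaving the smooth replica largely in the approximation band, which is the mechanism behind the robustness claim.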


Imaging thick specimens at a large penetration depth is a challenge in biophysics and materials science. Refractive-index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise into the acquired three-dimensional image. To overcome these unwanted artifacts, we introduce a two-fold approach: first, point-spread-function modeling with correction for spherical aberration, and second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encaged in agarose gel) show substantial minimization of artifacts. The noise is substantially suppressed, and the side-lobes (generated by the streaking effect) drop by 48.6% compared with the raw data at a depth of 150 μm. The proposed imaging technique can be integrated into sophisticated fluorescence imaging techniques to render high resolution beyond the 150 μm mark. (C) 2013 AIP Publishing LLC.
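As a sketch of the maximum-likelihood reconstruction step, the Richardson-Lucy iteration (the classical ML deconvolution for Poisson noise) is shown below in 1-D with an assumed Gaussian point-spread function; the paper's aberration-corrected PSF model is not reproduced here.

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=50):
    # Multiplicative ML update for Poisson noise:
    # est <- est * ( (data / (est*psf)) correlated with mirrored psf )
    psf_m = psf[::-1]                      # mirrored PSF for the adjoint
    est = np.full_like(data, data.mean())  # flat nonnegative initial guess
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = data / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_m, mode="same")
    return est

# Two nearby point emitters blurred by the assumed Gaussian PSF.
x = np.zeros(128)
x[60], x[68] = 100.0, 60.0
g = np.arange(-10, 11)
psf = np.exp(-g ** 2 / (2 * 3.0 ** 2))
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")

est = richardson_lucy(blurred, psf, n_iter=200)
print(int(np.argmax(est)))                 # peak near the brighter emitter
```

The multiplicative form keeps the estimate nonnegative, which is why ML methods of this family are natural for photon-counting data.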


There is a strong relation between sparse signal recovery and error-control coding. Burst errors are known to be block sparse in nature, so here we attempt to solve the burst error correction problem using block-sparse signal recovery methods. We construct partial-Fourier-based encoding and decoding matrices using results on difference sets. These constructions offer guaranteed and efficient error correction when used in conjunction with reconstruction algorithms that exploit block sparsity.
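A toy version of the recovery step might look like the following. The row-index set here is random rather than one of the paper's difference-set constructions, and a one-step block-matching recovery stands in for a full block-sparse reconstruction algorithm.

```python
import numpy as np

# Block-sparse recovery sketch with a partial-Fourier measurement matrix:
# a single nonzero block (a "burst") is located by correlation energy and
# then estimated by least squares on that block's columns.
rng = np.random.default_rng(2)
n, b, m = 24, 4, 16                 # signal length, block size, measurements
rows = rng.choice(n, size=m, replace=False)
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
A = F[rows, :]                       # partial Fourier sensing matrix

x = np.zeros(n)
x[8:12] = [3.0, -1.0, 2.0, 0.5]     # one nonzero block (block sparse)
y = A @ x

# Pick the block whose columns best explain y, then least-squares on it.
energies = [np.linalg.norm(A[:, k:k + b].conj().T @ y) for k in range(0, n, b)]
k = b * int(np.argmax(energies))
coef, *_ = np.linalg.lstsq(A[:, k:k + b], y, rcond=None)

x_hat = np.zeros(n, dtype=complex)
x_hat[k:k + b] = coef
print(np.linalg.norm(x_hat - x))     # near zero on exact identification
```

With a difference-set row construction, as in the paper, the cross-block coherence is controlled deterministically rather than on average, which is what enables guaranteed correction.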


The analytic signal (AS) was proposed by Gabor as a complex signal corresponding to a given real signal. The AS has a one-sided spectrum and gives rise to meaningful spectral averages. The Hilbert transform (HT) is a key component in Gabor's AS construction. We generalize the construction methodology by employing the fractional Hilbert transform (FrHT), without going through the standard fractional Fourier transform (FrFT) route. We discuss some properties of the fractional Hilbert operator and show how decomposition of the operator in terms of the identity and the standard Hilbert operators enables the construction of a family of analytic signals. We show that these analytic signals also satisfy Bedrosian-type properties and that their time-frequency localization properties are unaltered. We also propose a generalized-phase AS (GPAS) using a generalized-phase Hilbert transform (GPHT). We show that the GPHT shares many properties of the FrHT, in particular, selective highlighting of singularities, and a connection with Lie groups. We also investigate the duality between analyticity and causality concepts to arrive at a representation of causal signals in terms of the FrHT and GPHT. On the application front, we develop a secure multi-key single-sideband (SSB) modulation scheme and analyze its performance in noise and sensitivity to security key perturbations. (C) 2013 Elsevier B.V. All rights reserved.
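The operator decomposition mentioned above can be sketched directly: the fractional Hilbert transform is H_a = cos(a) I + sin(a) H, with H the ordinary Hilbert transform implemented through its FFT multiplier -j sgn(ω). A periodic, even-length real signal is assumed.

```python
import numpy as np

def hilbert_tx(x):
    # Ordinary Hilbert transform via its frequency response -j*sgn(omega).
    N = len(x)
    X = np.fft.fft(x)
    w = np.fft.fftfreq(N)
    return np.real(np.fft.ifft(-1j * np.sign(w) * X))

def frac_hilbert(x, a):
    # Decomposition in terms of the identity and standard Hilbert operators.
    return np.cos(a) * x + np.sin(a) * hilbert_tx(x)

t = np.arange(256) / 256.0
x = np.cos(2 * np.pi * 8 * t)

# a = 0 returns the signal unchanged; a = pi/2 reduces to the ordinary
# Hilbert transform, which maps cos to sin.
h = frac_hilbert(x, np.pi / 2)
print(np.allclose(h, np.sin(2 * np.pi * 8 * t), atol=1e-10))
```

Intermediate angles interpolate between the signal and its quadrature component, which is the degree of freedom the multi-key SSB scheme exploits.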


In this paper, we consider signal detection in nt × nr underdetermined MIMO (UD-MIMO) systems, where i) nt > nr with an overload factor α = nt/nr > 1, ii) nt symbols are transmitted per channel use through spatial multiplexing, and iii) nt and nr are large (in the range of tens). A low-complexity detection algorithm based on reactive tabu search is considered. A variable-threshold-based stopping criterion is proposed, which offers near-optimal performance in large UD-MIMO systems at low complexity. A lower bound on the maximum-likelihood (ML) bit error performance of large UD-MIMO systems is also obtained for comparison. The proposed algorithm is shown to achieve BER performance within 0.6 dB of the ML lower bound at an uncoded BER of 10⁻² in a 16 × 8 V-BLAST UD-MIMO system with 4-QAM (32 bps/Hz). Similar near-ML performance results are shown for 32 × 16 and 32 × 24 V-BLAST UD-MIMO with 4-QAM/16-QAM as well. A performance and complexity comparison between the proposed algorithm and the λ-generalized sphere decoder (λ-GSD) algorithm for UD-MIMO shows that the proposed algorithm achieves almost the same performance as λ-GSD but at significantly lower complexity.
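The tabu-search idea can be sketched at toy scale. BPSK symbols, a fixed tabu tenure, and plain random restarts are assumed below in place of the paper's reactive rules, QAM alphabets, large dimensions, and variable-threshold stopping criterion.

```python
import numpy as np

# Minimal tabu-search detector sketch for an underdetermined system
# (nt > nr): minimize ||y - H s||^2 over s in {-1,+1}^nt by best
# single-symbol flips, with recently flipped positions tabu.
rng = np.random.default_rng(3)
nt, nr = 6, 4
H = rng.normal(size=(nr, nt))
x = rng.choice([-1.0, 1.0], nt)          # transmitted BPSK vector
y = H @ x + 0.05 * rng.normal(size=nr)

def cost(s):
    return float(np.sum((y - H @ s) ** 2))

best, best_cost = None, np.inf
for _ in range(8):                        # random restarts (simplification)
    s = rng.choice([-1.0, 1.0], nt)
    tabu = []
    for _ in range(50):
        # best non-tabu single-symbol flip, accepted even if uphill
        cands = []
        for i in range(nt):
            if i in tabu:
                continue
            n = s.copy()
            n[i] *= -1
            cands.append((cost(n), i, n))
        c, i, s = min(cands, key=lambda z: z[0])
        tabu = (tabu + [i])[-2:]          # short fixed tabu tenure
        if c < best_cost:
            best, best_cost = s.copy(), c

print(best_cost)                          # near the noise floor on success
```

Accepting uphill moves while forbidding immediate reversals is what lets the search escape the many local minima created by the rank-deficient channel.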


Sparse recovery methods utilize l_p-norm-based regularization in the estimation problem, with 0 ≤ p ≤ 1. These methods are particularly useful when the number of independent measurements is limited, which is typical for the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation to the l_0-norm, have been deployed for the reconstruction of diffuse optical images. Their performance was compared systematically using both numerical and gelatin phantom cases to show that these methods hold promise for improving the reconstructed image quality.
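One common way to handle the non-smooth l_p penalty (0 < p ≤ 1) is iteratively reweighted least squares (IRLS). The sketch below uses a synthetic Gaussian matrix as a stand-in for a diffuse optical Jacobian; the IRLS formulation is a generic choice, not necessarily the exact algorithm compared in the paper.

```python
import numpy as np

# IRLS sketch for min ||A f - y||^2 + lam * ||f||_p^p:
# the penalty is majorized by sum w_i f_i^2 with w_i = (p/2)|f_i|^(p-2),
# so each iteration is a weighted ridge regression.
rng = np.random.default_rng(4)
m, n, p, lam = 40, 100, 0.8, 1e-2        # underdetermined: m < n
A = rng.normal(size=(m, n)) / np.sqrt(m)
f_true = np.zeros(n)
f_true[[10, 40, 77]] = [1.5, -2.0, 1.0]  # sparse "image" (hypothetical)
y = A @ f_true + 1e-3 * rng.normal(size=m)

f = np.zeros(n)
eps = 1e-6                               # smoothing to keep weights finite
for _ in range(50):
    w = (p / 2.0) * (np.abs(f) ** 2 + eps) ** (p / 2.0 - 1.0)
    f = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)

print(np.linalg.norm(f - f_true))        # small if the support is recovered
```

The growing weights on near-zero entries are what drive them to zero, mimicking the sparsity-promoting effect of the l_p penalty even though each sub-problem is quadratic.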


Narrowband spectrograms of voiced speech can be modeled as the outcome of a two-dimensional (2-D) modulation process. In this paper, we develop a demodulation algorithm to estimate the 2-D amplitude modulation (AM) and carrier of a given spectrogram patch. The demodulation algorithm is based on the Riesz transform, a unitary, shift-invariant operator obtained as a 2-D extension of the well-known 1-D Hilbert transform operator. Existing methods for spectrogram demodulation rely on extending the sinusoidal demodulation method from the communications literature and require a precise estimate of the 2-D carrier. The proposed Riesz-transform-based method, on the other hand, does not require a carrier estimate. The proposed method and the sinusoidal demodulation scheme are tested on real speech data. Experimental results show that the demodulated AM and carrier from Riesz demodulation represent the spectrogram patch more accurately than those obtained using sinusoidal demodulation. The signal-to-reconstruction-error ratio was found to be about 2 to 6 dB higher for the proposed demodulation approach.
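The Riesz transform itself is easy to state in the frequency domain, with the component responses -j ωx/|ω| and -j ωy/|ω|. The sketch below applies it to a synthetic oriented grating (a stand-in for a spectrogram patch, not real speech data) and recovers a flat AM envelope without any carrier estimate, illustrating the carrier-free property claimed above.

```python
import numpy as np

def riesz(f):
    # 2-D Riesz transform via its frequency responses (-j wx/|w|, -j wy/|w|).
    F = np.fft.fft2(f)
    wy, wx = np.meshgrid(np.fft.fftfreq(f.shape[0]),
                         np.fft.fftfreq(f.shape[1]), indexing="ij")
    mag = np.hypot(wx, wy)
    mag[0, 0] = 1.0                       # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(-1j * wx / mag * F))
    r2 = np.real(np.fft.ifft2(-1j * wy / mag * F))
    return r1, r2

n = 64
yy, xx = np.mgrid[0:n, 0:n]
patch = np.cos(2 * np.pi * (8 * xx + 4 * yy) / n)   # oriented carrier, AM = 1

r1, r2 = riesz(patch)
am = np.sqrt(patch ** 2 + r1 ** 2 + r2 ** 2)        # monogenic amplitude
print(np.allclose(am, 1.0, atol=1e-6))
```

For a pure cosine carrier the two Riesz components carry the quadrature energy, so the amplitude cos² + sin² is exactly one regardless of the carrier's orientation or frequency.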


A novel Projection Error Propagation-based Regularization (PEPR) method is proposed to improve image quality in Electrical Impedance Tomography (EIT). The PEPR method defines the regularization parameter as a function of the projection error, i.e., the difference between experimental measurements and calculated data. The regularization parameter in the reconstruction algorithm is thus modified automatically according to the noise level in the measured data and the ill-posedness of the Hessian matrix. Resistivity imaging of practical phantoms is carried out with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm as well as with the Electrical Impedance Diffuse Optical Reconstruction Software (EIDORS), both using PEPR. The effect of the PEPR method is also studied on phantoms with different configurations and different current injection methods. All resistivity images reconstructed with the PEPR method are compared with the single-step regularization (STR) and Modified Levenberg Regularization (LMR) techniques. The results show that the PEPR technique reduces the projection error and solution error in each iteration, for both simulated and experimental data in both algorithms, and improves the reconstructed images in terms of contrast-to-noise ratio (CNR), percentage of contrast recovery (PCR), coefficient of contrast (COC), and diametric resistivity profile (DRP). (C) 2013 Elsevier Ltd. All rights reserved.
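The core PEPR idea, a regularization parameter driven by the current projection error, can be sketched on a linear toy problem. The forward model and the specific residual-to-parameter mapping below are illustrative assumptions, not the paper's EIT formulation.

```python
import numpy as np

# Adaptive-regularization sketch: at each iteration the Tikhonov parameter
# is set from the current projection error ||y - J f||, so early (noisy,
# large-residual) iterations are damped heavily and later ones lightly.
rng = np.random.default_rng(5)
m, n = 60, 30
J = rng.normal(size=(m, n))                  # stand-in Jacobian
f_true = rng.normal(size=n)
y = J @ f_true + 0.01 * rng.normal(size=m)

f = np.zeros(n)
for _ in range(20):
    r = y - J @ f                            # projection error
    lam = 0.1 * float(r @ r)                 # hypothetical PEPR-style mapping
    # regularized Gauss-Newton / Tikhonov update
    f = f + np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

print(np.linalg.norm(y - J @ f))             # residual shrinks toward noise
```

Because `lam` shrinks with the residual, the damping relaxes automatically as the fit improves, which is the self-tuning behavior the abstract attributes to PEPR.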


Spatial resolution in photoacoustic and thermoacoustic tomography is ultrasound transducer (detector) bandwidth limited. For a circular scanning geometry the axial (radial) resolution is not affected by the detector aperture, but the tangential (lateral) resolution is highly dependent on the aperture size, and it is also spatially varying (depending on the location relative to the scanning center). Several approaches have been reported to counter this problem by physically attaching a negative acoustic lens in front of the nonfocused transducer or by using virtual point detectors. Here, we have implemented a modified delay-and-sum reconstruction method, which takes into account the large aperture of the detector, leading to more than fivefold improvement in the tangential resolution in photoacoustic (and thermoacoustic) tomography. Three different types of numerical phantoms were used to validate our reconstruction method. It is also shown that we were able to preserve the shape of the reconstructed objects with the modified algorithm. (C) 2014 Optical Society of America
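The conventional delay-and-sum baseline that the modified method improves on can be sketched as follows, assuming an idealized point detector and a delta-pulse source on a circular scan; the aperture-aware weighting of the modified algorithm is not included.

```python
import numpy as np

# Conventional delay-and-sum for circular-scan photoacoustic tomography:
# each pixel sums the detector signals at the time-of-flight delay
# t = |pixel - detector| / c. Geometry and rates are illustrative.
c = 1500.0                        # speed of sound, m/s
fs = 20e6                         # sampling rate, Hz
n_det, R = 128, 0.02              # detectors on a 2 cm radius circle
src = np.array([0.004, -0.002])   # point absorber location, m

angles = 2 * np.pi * np.arange(n_det) / n_det
dets = R * np.column_stack([np.cos(angles), np.sin(angles)])

# Simulated sinogram: a unit spike at each detector's time of flight.
n_t = 800
sino = np.zeros((n_det, n_t))
tof = np.linalg.norm(dets - src, axis=1) / c
sino[np.arange(n_det), np.round(tof * fs).astype(int)] = 1.0

# Delay-and-sum over a small image grid around the source.
xs = np.linspace(-0.01, 0.01, 81)
img = np.zeros((81, 81))
for iy, yv in enumerate(xs):
    for ix, xv in enumerate(xs):
        d = np.linalg.norm(dets - np.array([xv, yv]), axis=1)
        idx = np.round(d / c * fs).astype(int)
        img[iy, ix] = sino[np.arange(n_det), idx].sum()

peak = np.unravel_index(np.argmax(img), img.shape)
print(xs[peak[1]], xs[peak[0]])   # should land near the true source
```

A finite detector aperture would smear each sinogram spike tangentially; the modified delay-and-sum in the paper compensates for this by accounting for the aperture during back-projection.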


The Large Hadron Collider (LHC) has completed its run at 8 TeV, with the ATLAS and CMS experiments having collected about 25 fb⁻¹ of data each. The discovery of a light Higgs boson, coupled with the lack of evidence for supersymmetry at the LHC so far, has motivated studies of supersymmetry in the context of naturalness, with the principal focus being the third-generation squarks. In this work, we analyze the prospects of the flavor-violating decay mode t̃₁ → c χ₁⁰ at 8 and 13 TeV center-of-mass energy at the LHC. This channel is also relevant in the dark matter context for the stop-coannihilation scenario, where the relic density depends on the mass difference between the lighter stop quark (t̃₁) and the lightest neutralino (χ₁⁰). This channel is extremely challenging to probe, especially when the mass difference between the lighter stop quark and the lightest neutralino is small. Using certain kinematical properties of signal events, we find that the level of backgrounds can be reduced substantially. We find that the prospects for this channel are limited by the low production cross section for top squarks and the limited luminosity at 8 TeV, but at the 13 TeV LHC with 100 fb⁻¹ of luminosity it is possible to probe top squarks with masses up to ~450 GeV. We also discuss how the sensitivity could be significantly improved by tagging charm jets.