840 results for Signal-to-noise Ratio
Abstract:
We address a new problem of improving automatic speech recognition performance, given multiple utterances of patterns from the same class. We formulate the problem of jointly decoding K multiple patterns given a single Hidden Markov Model. It is shown that such a solution is possible by aligning the K patterns using the proposed Multi Pattern Dynamic Time Warping algorithm, followed by the Constrained Multi Pattern Viterbi Algorithm. The new formulation is tested in the context of speaker-independent isolated word recognition for both clean and noisy patterns. When 10 percent of the speech is affected by burst noise at -5 dB signal-to-noise ratio (local), joint decoding using only two noisy patterns reduces the noisy-speech recognition error rate by about 51 percent relative to single-pattern decoding using the Viterbi Algorithm. In contrast, a simple maximization of individual pattern likelihoods provides only about a 7 percent reduction in error rate.
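As a toy illustration of the alignment step, here is the classic pairwise dynamic time warping recurrence (the multi-pattern algorithm above generalizes this to K patterns; the absolute-difference local cost is an assumption of this sketch):

```python
# Minimal sketch: pairwise DTW, the K = 2 special case of multi-pattern
# alignment. Local cost is |a_i - b_j| (an assumption for illustration).
import numpy as np

def dtw_align(a, b):
    """Return the minimal cumulative alignment cost between two 1-D
    feature sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # allowed moves: diagonal match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# two utterances of the same "word" align with zero cost despite timing drift
print(dtw_align([1.0, 2.0, 3.0], [1.0, 1.0, 2.0, 3.0]))  # -> 0.0
```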
Abstract:
By detecting leading protons produced in the Central Exclusive Diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology necessary for coping with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design, which in addition to radiation hardness up to 5×10^15 neutrons/cm², offers properties such as a high signal-to-noise ratio, fast signal response to radiation, and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but preserves the properties of the 3D detector design required in the LHC and in other imaging applications.
Abstract:
Differentiation of various types of soft tissues is of high importance in medical imaging, because changes in soft tissue structure are often associated with pathologies, such as cancer. However, the densities of different soft tissues may be very similar, making it difficult to distinguish them in absorption images. This is especially true when the consideration of patient dose limits the available signal-to-noise ratio. Refraction is more sensitive than absorption to changes in density, while small-angle x-ray scattering contains information about the macromolecular structure of the tissues. Both of these can be used as potential sources of contrast when soft tissues are imaged, but little is known about the visibility of the signals in realistic imaging situations. In this work, the visibility of small-angle scattering and refraction in the context of medical imaging has been studied using computational methods. The work focuses on the study of analyzer-based imaging, where the information about the sample is recorded in the rocking curve of the analyzer crystal. Computational phantoms based on simple geometrical shapes with differing material properties are used. The objects have realistic dimensions and attenuation properties that could be encountered in real imaging situations. The scattering properties mimic various features of measured small-angle scattering curves. Ray-tracing methods are used to calculate the refraction and attenuation of the beam, and a scattering halo is accumulated, including the effect of multiple scattering. The changes in the shape of the rocking curve are analyzed with different methods, including diffraction enhanced imaging (DEI), extended DEI (E-DEI), and multiple image radiography (MIR). A wide-angle DEI, called W-DEI, is introduced and its performance is compared with that of the established methods.
The results indicate that the differences in scattered intensities from healthy and malignant breast tissues are distinguishable to some extent at a reasonable dose. In particular, the fraction of total scattering differs enough between tissue types to serve as a useful source of contrast. The peaks related to the macromolecular structure appear at rather large angles and have intensities that are only a small fraction of the total scattered intensity; such peaks therefore seem to have only limited usefulness in medical imaging. It is also found that W-DEI performs rather well when most of the intensity remains in the direct beam, indicating that dark field imaging methods may produce the best results when scattering is weak. Altogether, it is found that the analysis of scattered intensity is a viable option even in medical imaging, where the patient dose is the limiting factor.
Abstract:
In this paper, new results and insights are derived for the performance of multiple-input, single-output systems with beamforming at the transmitter, when the channel state information is quantized and sent to the transmitter over a noisy feedback channel. It is assumed that there exists a per-antenna power constraint at the transmitter; hence, the equal gain transmission (EGT) beamforming vector is quantized and sent from the receiver to the transmitter. The loss in received signal-to-noise ratio (SNR) relative to perfect beamforming is analytically characterized, and it is shown that at high rates, the overall distortion can be expressed as the sum of the quantization-induced distortion and the channel error-induced distortion, and that the asymptotic performance depends on the error-rate behavior of the noisy feedback channel as the number of codepoints gets large. The optimum density of codepoints (also known as the point density) that minimizes the overall distortion subject to a boundedness constraint is shown to be the same as the point density for a noiseless feedback channel, i.e., the uniform density. The binary symmetric channel with random index assignment is a special case of the analysis, and it is shown that as the number of quantized bits gets large, the distortion approaches that obtained with random beamforming. The accuracy of the theoretical expressions obtained is verified through Monte Carlo simulations.
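The SNR loss from quantizing the EGT phases can be illustrated with a small Monte Carlo sketch (uniform scalar phase quantization over a Rayleigh channel is an assumption of this sketch, not the paper's derivation, which also models feedback-channel errors):

```python
# Hedged sketch: relative received-SNR loss of equal gain transmission
# when each per-antenna phase is quantized to B bits before feedback.
# Assumes i.i.d. Rayleigh fading and an error-free feedback link.
import numpy as np

def egt_snr_loss(n_tx=4, bits=3, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal((trials, n_tx)) +
         1j * rng.standard_normal((trials, n_tx))) / np.sqrt(2)
    phase = np.angle(h)
    step = 2 * np.pi / 2**bits
    q = np.round(phase / step) * step                 # uniform phase quantizer
    perfect = np.abs(np.sum(np.abs(h), axis=1))**2    # ideal EGT combining
    quantized = np.abs(np.sum(np.abs(h) * np.exp(1j * (phase - q)), axis=1))**2
    return np.mean(perfect - quantized) / np.mean(perfect)

loss3 = egt_snr_loss(bits=3)
loss6 = egt_snr_loss(bits=6)
print(loss3, loss6)  # finer quantization -> smaller relative SNR loss
```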
Abstract:
The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate as measured in constellation symbols per channel use and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let n_t denote the number of transmit antennas and T the block interval. For any n_t <= T, a unified construction of (n_t x T) ST codes is provided here, for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and 2^K-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size S^K. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
Abstract:
In many IEEE 802.11 WLAN deployments, wireless clients have a choice of access points (APs) to connect to. In current systems, clients associate with the access point with the strongest signal-to-noise ratio. However, such an association mechanism can lead to unequal load sharing, resulting in diminished system performance. In this paper, we first provide a numerical approach based on stochastic dynamic programming to find the optimal client-AP association algorithm for a small topology consisting of two access points. Using the value iteration algorithm, we determine the optimal association rule for the two-AP topology. Next, utilizing the insights obtained from the optimal association rule for the two-AP case, we propose a near-optimal heuristic that we call RAT. We test the efficacy of RAT by considering more realistic arrival patterns and a larger topology. Our results show that RAT performs very well in these scenarios as well. Moreover, RAT lends itself to a fairly simple implementation.
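The value iteration algorithm mentioned above can be sketched generically; the two-state MDP below is a hypothetical toy for illustration, not the paper's two-AP association model:

```python
# Generic value iteration, the numerical tool used to solve the
# association MDP. The toy MDP at the bottom is an assumption of this
# sketch, chosen so the optimal policy is obvious.
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a, s, s'] = transition probabilities, R[a, s] = expected rewards.
    Returns the optimal value function and the greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)        # Q[a, s] = R[a, s] + gamma * E[V(s')]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# two-state toy: action 1 always pays 1, action 0 pays nothing
P = np.array([[[1.0, 0.0], [1.0, 0.0]],    # action 0: go to state 0
              [[0.0, 1.0], [0.0, 1.0]]])   # action 1: go to state 1
R = np.array([[0.0, 0.0],
              [1.0, 1.0]])
V, policy = value_iteration(P, R)
print(policy)  # -> [1 1]: always take the rewarding action
```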
Abstract:
A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value for the optimum spacing of the reference sensor, subject to a soft constraint on the signal-to-noise ratio (SNR). How this minimum-norm property can be used to find the optimum spacing of the reference sensor is described. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.
Abstract:
Molecular machinery on the micro-scale, believed to constitute the fundamental building blocks of life, involves forces of 1-100 pN and movements of nanometers to micrometers. Micromechanical single-molecule experiments seek to understand the physics of nucleic acids, molecular motors, and other biological systems through direct measurement of forces and displacements. Optical tweezers are a popular choice among several complementary techniques for sensitive force spectroscopy in the field of single-molecule biology. The main objective of this thesis was to design and construct an optical tweezers instrument capable of investigating the physics of molecular motors and mechanisms of protein/nucleic-acid interactions on the single-molecule level. A double-trap optical tweezers instrument incorporating acousto-optic trap-steering, two independent detection channels, and a real-time digital controller was built. A numerical simulation and a theoretical study were performed to assess the signal-to-noise ratio in a constant-force molecular motor stepping experiment. Real-time feedback control of optical tweezers was explored in three studies. Position clamping was implemented and compared to theoretical models using both proportional and predictive control. A force clamp was implemented and tested with a DNA tether in the presence of the enzyme lambda exonuclease. The results indicate that the presented models describing signal-to-noise ratio in constant-force experiments and feedback control experiments in optical tweezers agree well with experimental data. The effective trap stiffness can be increased by an order of magnitude using the presented position-clamping method. The force clamp can be used for constant-force experiments, and the results from a proof-of-principle experiment, in which the enzyme lambda exonuclease converts double-stranded DNA to single-stranded DNA, agree with previous research. The main objective of the thesis was thus achieved.
The developed instrument and presented results on feedback control serve as a stepping stone for future contributions to the growing field of single molecule biology.
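The position-clamping effect described above can be illustrated with a minimal overdamped-bead simulation; the dimensionless parameters and plain proportional control are assumptions of this sketch, not the thesis' instrument model:

```python
# Sketch: Langevin simulation of an overdamped bead in an optical trap.
# Proportional feedback moves the trap center to -gain * x, which raises
# the effective stiffness to k * (1 + gain) and shrinks the position
# variance accordingly (equipartition: var = kT / k_eff).
# All parameters are dimensionless assumptions for illustration.
import numpy as np

def bead_position_variance(gain=0.0, k=1.0, gamma=1.0, kT=1.0,
                           dt=1e-3, steps=200000, seed=1):
    rng = np.random.default_rng(seed)
    x = 0.0
    xs = np.empty(steps)
    noise = np.sqrt(2 * kT * dt / gamma)   # thermal kick per step
    for t in range(steps):
        force = -k * (x - (-gain * x))     # trap center clamped at -gain * x
        x += force * dt / gamma + noise * rng.standard_normal()
        xs[t] = x
    return xs.var()

var_open = bead_position_variance(gain=0.0)
var_clamped = bead_position_variance(gain=9.0)
print(var_open / var_clamped)  # roughly (1 + gain) = 10 by equipartition
```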
Abstract:
The use of energy harvesting (EH) nodes as cooperative relays is a promising and emerging solution in wireless systems such as wireless sensor networks. It harnesses the spatial diversity of a multi-relay network and addresses the vexing problem of a relay's batteries being drained by forwarding information to the destination. We consider a cooperative system in which EH nodes volunteer to serve as amplify-and-forward relays whenever they have sufficient energy for transmission. For a general class of stationary and ergodic EH processes, we introduce the notion of energy constrained and energy unconstrained relays and analytically characterize the symbol error rate of the system. Further insight is gained by an asymptotic analysis that considers the cases where the signal-to-noise ratio or the number of relays is large. Our analysis quantifies how the energy usage at an EH relay and, consequently, its availability for relaying, depends not only on the relay's energy harvesting process, but also on its transmit power setting and the other relays in the system. The optimal static transmit power setting at the EH relays is also determined. Altogether, our results demonstrate how a system that uses EH relays differs in significant ways from one that uses conventional cooperative relays.
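The notion of an energy-constrained relay can be illustrated with a toy battery model (Bernoulli energy arrivals and a fixed per-use transmit energy are assumptions of this sketch; the paper treats general stationary ergodic EH processes):

```python
# Hypothetical toy model: a relay harvests one energy unit per slot with
# probability p_harvest and forwards only when its battery holds at least
# the per-transmission energy e_tx. In the energy-constrained regime
# (p_harvest < e_tx), the long-run availability tends to p_harvest / e_tx.
import numpy as np

def relay_availability(p_harvest=0.3, e_tx=2, horizon=100000, seed=2):
    rng = np.random.default_rng(seed)
    battery, available = 0, 0
    for _ in range(horizon):
        battery += rng.random() < p_harvest   # harvest one unit w.p. p_harvest
        if battery >= e_tx:                   # enough energy -> relay forwards
            battery -= e_tx
            available += 1
    return available / horizon

print(relay_availability())  # tends toward p_harvest / e_tx = 0.15
```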
Abstract:
For the specific case of binary stars, this paper presents signal-to-noise ratio (SNR) calculations for the detection of the parity (i.e., on which side the brighter component lies) of the binary using the double correlation method. This double correlation method is a focal-plane version of the well-known Knox-Thompson method used in speckle interferometry. It is shown that the SNR for parity detection using double correlation depends linearly on the binary separation. This new result was entirely missed by previous analytical calculations dealing with a point source. It is concluded that, for magnitudes relevant to present-day speckle interferometry and for binary separations close to the diffraction limit, speckle masking has the better SNR for parity detection.
Abstract:
We present observations of low-frequency recombination lines of carbon toward Cas A near 34.5 MHz (n ≈ 575) using the Gauribidanur radio telescope and near 560 MHz (n ≈ 225) and 770 MHz (n ≈ 205) using the NRAO 140 foot (43 m) telescope in Green Bank. We also present high angular resolution (1') observations of the C270α line near 332 MHz using the Very Large Array in B-configuration. A high signal-to-noise ratio spectrum is obtained at 34.5 MHz, which clearly shows a Voigt profile with distinct Lorentzian wings, resulting from significant pressure and radiation broadening at such high quantum numbers. The emission lines detected near 332, 550, and 770 MHz, on the other hand, are narrow and essentially Doppler-broadened. The measured Lorentzian width at 34.5 MHz constrains the allowed combinations of radiation temperature, electron density, and electron temperature in the line-forming region. Radiation broadening at 34.5 MHz places a lower limit of 115 pc on the separation between Cas A and the line-forming clouds. Modeling the variation in the integrated line-to-continuum ratio with frequency indicates that the region is likely to be associated with the cold atomic hydrogen component of the interstellar medium, and the physical properties of this region are likely to be T_e = 75 K, n_e = 0.02 cm⁻³, T_R100 = 3200 K, and n_H T_e = 10,000 cm⁻³ K. Comparison of the distribution of the C270α recombination line emission across Cas A with that of ¹²CO and H I also supports the above conclusion.
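The Voigt profile mentioned above (a Doppler/Gaussian core convolved with Lorentzian pressure and radiation wings) can be built numerically; the widths below are arbitrary assumptions for illustration, not values fitted to the Cas A spectrum:

```python
# Sketch: a Voigt profile as the numerical convolution of a unit-area
# Gaussian (Doppler core) with a unit-area Lorentzian (pressure/radiation
# wings). sigma and gamma here are arbitrary, for illustration only.
import numpy as np

def voigt_profile(x, sigma, gamma):
    """Gaussian (*) Lorentzian, evaluated on the uniform grid x."""
    dx = x[1] - x[0]
    gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    lorentz = (gamma / np.pi) / (x**2 + gamma**2)
    return np.convolve(gauss, lorentz, mode="same") * dx

x = np.linspace(-50, 50, 4001)
v = voigt_profile(x, sigma=1.0, gamma=1.0)
# the wings fall off like the Lorentzian, far more slowly than a Gaussian,
# which is what makes the 34.5 MHz line shape distinctive
print(v.sum() * (x[1] - x[0]))  # area close to 1 (small truncation loss)
```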
Abstract:
We report Doppler-only radar observations of Icarus at Goldstone at a transmitter frequency of 8510 MHz (3.5 cm wavelength) during 8-10 June 1996, the first radar detection of the object since 1968. Optimally filtered and folded spectra achieve a maximum opposite-circular (OC) polarization signal-to-noise ratio of about 10 and help to constrain Icarus' physical properties. We obtain an OC radar cross section of 0.05 km² (with a 35% uncertainty), which is less than the values estimated by Goldstein (1969) and by Pettengill et al. (1969), and a circular polarization (SC/OC) ratio of 0.5 ± 0.2. We analyze the echo power spectrum with a model incorporating the echo bandwidth B and a spectral shape parameter n, yielding a coupled constraint between B and n. We adopt 25 Hz as the lower bound on B, which gives a lower bound of about 0.6 km on the maximum pole-on breadth and upper bounds on the radar and optical albedos that are consistent with Icarus' tentative QS classification. The observed circular polarization ratio indicates a very rough near-surface at spatial scales of the order of the radar wavelength.
Abstract:
We develop an optimal, distributed, and low-feedback timer-based selection scheme to enable next-generation rate-adaptive wireless systems to exploit multi-user diversity. In our scheme, each user sets a timer depending on its signal-to-noise ratio (SNR) and transmits a small packet to identify itself when its timer expires. When the SNR-to-timer mapping is monotone non-increasing, timers of users with better SNRs expire earlier. Thus, the base station (BS) simply selects the first user whose timer expiry it can detect, and transmits data to it at as high a rate as reliably possible. However, timers that expire too close to one another cannot be detected by the BS due to collisions. We characterize in detail the structure of the SNR-to-timer mapping that optimally handles these collisions to maximize the average data rate. We prove that the optimal timer values take only a discrete set of values, and that the rate adaptation policy strongly influences the optimal scheme's structure. The optimal average rate is very close to that of ideal selection, in which the BS always selects the highest-rate user, and is much higher than that of the popular, but ad hoc, timer schemes considered in the literature.
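The timer mechanism and its collision behavior can be sketched as follows; the inverse-SNR mapping and the fixed detection window `delta` are assumptions of this sketch, not the paper's optimal discrete mapping:

```python
# Simplified illustration of timer-based selection: each user's timer is a
# monotone non-increasing function of its SNR (here t_max / (1 + SNR), an
# assumed mapping), so better users expire earlier. Two expiries closer
# than `delta` collide and neither packet is decodable.
import numpy as np

def timer_selection(snrs, t_max=1.0, delta=0.05):
    """Return the index of the first user whose timer packet the BS can
    decode (separated by >= delta from its neighbors), or None."""
    timers = t_max / (1.0 + np.asarray(snrs, dtype=float))
    order = np.argsort(timers)
    t = timers[order]
    for i, user in enumerate(order):
        ok_prev = i == 0 or t[i] - t[i - 1] >= delta
        ok_next = i == len(t) - 1 or t[i + 1] - t[i] >= delta
        if ok_prev and ok_next:
            return int(user)     # first decodable expiry wins
    return None                  # every expiry collided

print(timer_selection([10.0, 2.0, 0.5]))   # -> 0: best-SNR user selected
print(timer_selection([10.0, 9.9, 1.0]))   # -> 2: top two collide
```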
Abstract:
This paper compares and analyzes the performance of distributed cophasing techniques for uplink transmission over wireless sensor networks. We focus on a time-division duplexing approach and exploit channel reciprocity to reduce the channel feedback requirement. We consider periodic broadcast of known pilot symbols by the fusion center (FC), and maximum likelihood estimation of the channel by the sensor nodes for the subsequent uplink cophasing transmission. We assume carrier and phase synchronization across the participating nodes for analytical tractability. We study binary signaling over frequency-flat fading channels, and quantify the system performance, such as the expected gain in the received signal-to-noise ratio (SNR) and the average probability of error at the FC, as a function of the number of sensor nodes and the pilot overhead. Our results show that a modest amount of accumulated pilot SNR is sufficient to realize a large fraction of the maximum possible beamforming gain. We also investigate the performance gains obtained by censoring transmission at the sensors based on the estimated channel state, and the benefits obtained by using maximum ratio transmission (MRT) and truncated channel inversion (TCI) at the sensors in addition to cophasing transmission. Simulation results corroborate the theoretical expressions and show the relative performance benefits offered by the various schemes.
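A rough intuition for why modest pilot SNR recovers most of the beamforming gain: the sketch below Monte Carlo-averages the received power of N cophased unit phasors under a simple Gaussian phase-error model (an assumption of this sketch, not the paper's ML channel estimator):

```python
# Sketch: received power of N sensors cophasing with imperfect phase
# estimates, relative to a single node. Perfect cophasing gives power N^2;
# i.i.d. Gaussian phase errors (std in radians) erode the coherent term.
import numpy as np

def mean_cophased_power(n_nodes, phase_err_std, trials=20000, seed=3):
    rng = np.random.default_rng(seed)
    err = rng.normal(0.0, phase_err_std, size=(trials, n_nodes))
    power = np.abs(np.exp(1j * err).sum(axis=1))**2
    return power.mean()

g_perfect = mean_cophased_power(10, 0.0)   # exactly N^2 = 100
g_noisy = mean_cophased_power(10, 0.5)     # a large fraction is retained
print(g_perfect, g_noisy)
```

Even with a 0.5 rad phase-error standard deviation, the expected power stays near `N + N*(N-1)*exp(-sigma^2)`, i.e. about 80% of the full beamforming gain here.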
Abstract:
We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form, in the presence of additive noise. The problem formulation is in the time domain and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying PMMSE-optimal window length whose performance is superior to that obtained by using a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence interval width. Simulation results on electrocardiogram (ECG) signals show that at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte Carlo performance analysis shows that the performance is comparable to that of basic wavelet techniques. For 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques, and for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
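The ICI rule itself is compact: compute estimates with growing windows, intersect their confidence intervals, and stop growing once the intersection becomes empty (the point where bias starts to dominate variance). A sketch with a moving-average estimator standing in for the local polynomial fit (kappa, the window set, and known noise sigma are assumptions of this sketch):

```python
# Illustrative sketch of the intersection-of-confidence-intervals (ICI)
# window selection rule, using a plain moving average in place of the
# local polynomial estimator. kappa and the window list are assumed.
import numpy as np

def ici_estimate(samples, t, windows, sigma, kappa=2.0):
    """Return the estimate at index t for the largest window whose
    confidence interval still intersects all smaller-window intervals."""
    lo, hi = -np.inf, np.inf
    best = None
    for h in windows:                          # windows in increasing order
        seg = samples[max(0, t - h): t + h + 1]
        est = seg.mean()                       # local estimate, window h
        half = kappa * sigma / np.sqrt(len(seg))
        lo, hi = max(lo, est - half), min(hi, est + half)
        if lo > hi:                            # intervals disjoint:
            break                              # bias now dominates, stop
        best = est
    return best

rng = np.random.default_rng(4)
sigma = 0.5
signal = np.concatenate([np.zeros(100), np.ones(100)])  # a step edge
noisy = signal + rng.normal(0, sigma, signal.size)
# far from the edge, ICI keeps widening the window; the estimate is near 0
print(ici_estimate(noisy, 50, [2, 4, 8, 16, 32], sigma))
```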