527 results for SNR maximisation


Relevance: 10.00%

Publisher:

Abstract:

Joint decoding of multiple speech patterns so as to improve speech recognition performance is important, especially in the presence of noise. In this paper, we propose a Multi-Pattern Viterbi Algorithm (MPVA) to jointly decode and recognize multiple speech patterns for automatic speech recognition (ASR). The MPVA is a generalization of the Viterbi algorithm that jointly decodes multiple patterns given a Hidden Markov Model (HMM). Unlike the previously proposed two-stage Constrained Multi-Pattern Viterbi Algorithm (CMPVA), the MPVA is a single-stage algorithm. The MPVA has the advantage that it can be extended to connected word recognition (CWR) and continuous speech recognition (CSR) problems. The MPVA is shown to provide better speech recognition performance than the earlier techniques: using only two repetitions of noisy speech patterns (-5 dB SNR, 10% burst noise), the word error rate using the MPVA decreased by 28.5% compared to individual decoding. (C) 2010 Elsevier B.V. All rights reserved.
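
The MPVA itself jointly optimises over state sequences and local time alignments of the repeated patterns; as a rough, hedged illustration of the "joint decoding" idea only, the sketch below runs a standard Viterbi pass after summing the per-frame emission log-likelihoods of K repetitions, i.e. it assumes the repetitions are already frame-aligned. Function and variable names are illustrative, not taken from the paper.

    # Minimal sketch: joint Viterbi decoding of K frame-synchronous repetitions
    # of an utterance against a single HMM. This simplification of the MPVA
    # assumes the repetitions are frame-aligned, so their per-frame emission
    # log-likelihoods simply add before a standard Viterbi pass.
    import numpy as np

    def joint_viterbi(log_A, log_pi, log_B_list):
        """log_A: (S, S) transition log-probs; log_pi: (S,) initial log-probs;
        log_B_list: list of K arrays, each (T, S) of emission log-likelihoods,
        one per repeated pattern (assumed time-aligned)."""
        log_B = np.sum(log_B_list, axis=0)          # joint emission score per frame
        T, S = log_B.shape
        delta = log_pi + log_B[0]                   # best score ending in each state
        psi = np.zeros((T, S), dtype=int)           # back-pointers
        for t in range(1, T):
            scores = delta[:, None] + log_A         # (prev_state, state)
            psi[t] = np.argmax(scores, axis=0)
            delta = scores[psi[t], np.arange(S)] + log_B[t]
        # Backtrack the jointly most likely state sequence.
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]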

Relevance: 10.00%

Publisher:

Abstract:

Molecular machinery on the micro-scale, believed to comprise the fundamental building blocks of life, involves forces of 1-100 pN and movements of nanometers to micrometers. Micromechanical single-molecule experiments seek to understand the physics of nucleic acids, molecular motors, and other biological systems through direct measurement of forces and displacements. Optical tweezers are a popular choice among several complementary techniques for sensitive force spectroscopy in the field of single-molecule biology. The main objective of this thesis was to design and construct an optical tweezers instrument capable of investigating the physics of molecular motors and the mechanisms of protein/nucleic-acid interactions at the single-molecule level. A double-trap optical tweezers instrument incorporating acousto-optic trap steering, two independent detection channels, and a real-time digital controller was built. A numerical simulation and a theoretical study were performed to assess the signal-to-noise ratio in a constant-force molecular motor stepping experiment. Real-time feedback control of optical tweezers was explored in three studies. Position clamping was implemented and compared to theoretical models using both proportional and predictive control. A force clamp was implemented and tested with a DNA tether in the presence of the enzyme lambda exonuclease. The results of the study indicate that the presented models describing the signal-to-noise ratio in constant-force experiments and feedback-control experiments in optical tweezers agree well with experimental data. The effective trap stiffness can be increased by an order of magnitude using the presented position-clamping method. The force clamp can be used for constant-force experiments, and the results of a proof-of-principle experiment, in which the enzyme lambda exonuclease converts double-stranded DNA to single-stranded DNA, agree with previous research. The main objective of the thesis was thus achieved. The developed instrument and the presented results on feedback control serve as a stepping stone for future contributions to the growing field of single-molecule biology.
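
As a hedged illustration of the position-clamp idea (not the instrument's actual controller or parameters), the sketch below simulates an optically trapped bead with overdamped Langevin dynamics and a proportional feedback law that steers the trap against the measured displacement; the bead-position variance drops by roughly the factor (1 + gain), i.e. the effective stiffness increases accordingly. All parameter values are typical order-of-magnitude assumptions.

    # Sketch: proportional position clamp of a trapped bead, simulated with
    # overdamped Langevin dynamics. The trap is moved opposite to the measured
    # displacement, raising the effective stiffness roughly by (1 + gain).
    import numpy as np

    kB_T = 4.11e-21      # J, thermal energy at room temperature
    gamma = 9.4e-9       # N s/m, Stokes drag of a ~1 um bead in water (approx.)
    k = 5e-5             # N/m, trap stiffness (illustrative)
    dt = 1e-6            # s, time step = feedback update interval here
    gain = 9.0           # proportional feedback gain
    steps = 200_000

    rng = np.random.default_rng(0)
    x, x_trap = 0.0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        # Euler-Maruyama step of dx = -(k/gamma)(x - x_trap) dt + sqrt(2 kB T / gamma) dW
        noise = np.sqrt(2 * kB_T / gamma * dt) * rng.standard_normal()
        x += -(k / gamma) * (x - x_trap) * dt + noise
        x_trap = -gain * x          # proportional feedback on the measured position
        xs[i] = x

    var_fb = xs.var()
    var_free = kB_T / k             # equipartition variance without feedback
    print(f"effective stiffness gain ~ {var_free / var_fb:.1f}x")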

Relevance: 10.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most out of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, i.e., geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
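
For orientation, a max-min linear program can be written in epigraph form and handed to any LP solver; the toy instance below (made-up numbers, scipy's linprog) maximises the minimum of several linear utilities subject to linear packing constraints, which is the abstract shape of the lifetime-maximisation problem. The thesis's contribution concerns local algorithms for such programs; this sketch only shows the centralised formulation.

    # Sketch: a max-min linear program in epigraph form,
    #   maximize  min_k  c_k . x   subject to  A x <= 1,  x >= 0,
    # e.g. each c_k . x a flow delivered by source k, each row of A an energy
    # budget. Numbers are made up.
    import numpy as np
    from scipy.optimize import linprog

    C = np.array([[1.0, 0.0, 1.0],      # utility of each source k as a function of x
                  [0.0, 1.0, 1.0]])
    A = np.array([[2.0, 1.0, 1.0],      # resource (energy) constraints A x <= 1
                  [1.0, 2.0, 1.0]])

    n = C.shape[1]
    # Variables: [x_1..x_n, w];  maximize w  s.t.  C x >= w 1,  A x <= 1,  x >= 0.
    obj = np.r_[np.zeros(n), -1.0]                       # linprog minimizes
    A_ub = np.block([[-C, np.ones((C.shape[0], 1))],     # w - C x <= 0
                     [A, np.zeros((A.shape[0], 1))]])    # A x <= 1
    b_ub = np.r_[np.zeros(C.shape[0]), np.ones(A.shape[0])]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n + [(None, None)])
    print("optimal min utility:", -res.fun, "x =", res.x[:n])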

Relevance: 10.00%

Publisher:

Abstract:

To describe the atmospheric turbulence that limits the resolution of long-exposure images obtained with large ground-based telescopes, a simplified model of a speckle pattern is presented, which reduces the complexity of calculating field correlations of very high order. Focal-plane correlations are used instead of correlations in the spatial-frequency domain. General triple correlations for a point source and for a binary star are calculated, and it is shown that they are not a strong function of the binary separation. For binary separations close to the diffraction limit of the telescope, the genuine triple correlation technique ensures a better SNR than the near-axis Knox-Thompson technique. The simplifications allow a complete analysis of the noise properties at all light levels.
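
As a hedged, purely numerical illustration of the focal-plane estimator (not the paper's noise analysis), the sketch below accumulates a triple correlation < I(x) I(x+d1) I(x+d2) > over simulated short-exposure frames; the frames themselves are crude smoothed-noise stand-ins for speckle images.

    # Sketch: estimating a focal-plane triple correlation
    #   T(d1, d2) = < I(x) I(x + d1) I(x + d2) >,
    # averaged over position x and over frames. Only the estimator is the point;
    # the "speckle" frames are low-pass-filtered complex noise, squared.
    import numpy as np

    rng = np.random.default_rng(1)

    def fake_speckle_frame(n=128, speckle=4):
        field = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        fx = np.fft.fftfreq(n)
        thr = 1.0 / (2 * speckle)
        mask = (np.abs(fx)[:, None] < thr) & (np.abs(fx)[None, :] < thr)
        return np.abs(np.fft.ifft2(np.fft.fft2(field) * mask)) ** 2

    def triple_corr(frames, d1, d2):
        acc = 0.0
        for I in frames:
            acc += np.mean(I * np.roll(I, d1, axis=(0, 1)) * np.roll(I, d2, axis=(0, 1)))
        return acc / len(frames)

    frames = [fake_speckle_frame() for _ in range(50)]
    print(triple_corr(frames, d1=(0, 2), d2=(0, 5)))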

Relevance: 10.00%

Publisher:

Abstract:

For the specific case of binary stars, this paper presents signal-to-noise ratio (SNR) calculations for detecting the parity (the side on which the brighter component lies) of the binary using the double correlation method. The double correlation method is a focal-plane version of the well-known Knox-Thompson method used in speckle interferometry. It is shown that the SNR for parity detection using double correlation depends linearly on the binary separation. This new result was entirely missed by previous analytical calculations dealing with a point source. It is concluded that, for magnitudes relevant to present-day speckle interferometry and for binary separations close to the diffraction limit, speckle masking has a better SNR for parity detection.
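
For orientation only (this is the standard spatial-frequency-domain relation behind the Knox-Thompson method, of which the double correlation used here is a focal-plane counterpart; normalisation details omitted): writing the short-exposure image as the convolution of the object O with the instantaneous point-spread function S, frame averaging gives

    \langle \hat I(\mathbf{f})\,\hat I^{*}(\mathbf{f}+\Delta\mathbf{f}) \rangle
      = \hat O(\mathbf{f})\,\hat O^{*}(\mathbf{f}+\Delta\mathbf{f})\,
        \langle \hat S(\mathbf{f})\,\hat S^{*}(\mathbf{f}+\Delta\mathbf{f}) \rangle ,

so the object phase difference across the small offset Delta f survives the averaging; it is this phase information that distinguishes the two mirror-reversed (parity-flipped) binaries, which the power spectrum alone cannot separate.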

Relevance: 10.00%

Publisher:

Abstract:

Reaction of the bicyclic phosphazane N5P4Et5Cl2 with 2,6-dimethylphenol and subsequent oxidation of the product by aqueous hydrogen peroxide yields N5P4Et5O4(OC6H3Me2-2,6)2 in 85% yield. Its structure has been established by NMR spectroscopy and single-crystal X-ray diffraction. The compound crystallises in the monoclinic space group C2/c with a = 21.245(5), b = 10.879(2), c = 16.450(6) Å, β = 123.94(2)°, Z = 4, R = 0.066. The structural features are compared with those of bicyclic λ5-phosphazenes of type N5P4R3(NR1R2)5(NHR3) (R1, R3 = Me or Et; R2 = H or Me). The observed conformation of the N3P3 rings in the present compound is mainly dictated by maximisation of the stabilising influence of "negative hyperconjugative interactions" between the nitrogen lone pairs and the adjacent P–X σ* orbitals.

Relevance: 10.00%

Publisher:

Abstract:

A SAW matched filter is commonly used in spread-spectrum communication receivers to maximize the SNR prior to detection. At times the receiver is a mobile one, while the signal is processed at the IF level. In that case, frequency deviations due to Doppler shift or to the temperature dependence of the acoustic medium used in the SAW device can severely affect its performance. The impact of these errors on receiver performance is analyzed on a generalised basis.
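
As a hedged numerical illustration (illustrative waveform, not a SAW device model), the sketch below shows how a residual carrier-frequency error df over the matched filter's integration time T scales the correlation peak roughly as |sinc(df T)|, which is the mechanism by which Doppler or temperature-induced frequency drift degrades the despreading gain.

    # Sketch: matched-filter peak loss caused by a residual carrier frequency
    # error df over the integration time T; the peak scales roughly as |sinc(df*T)|.
    import numpy as np

    rng = np.random.default_rng(2)
    chip_rate = 1e6                 # chips/s (illustrative)
    n_chips = 127                   # PN sequence length
    samples_per_chip = 8
    fs = chip_rate * samples_per_chip
    T = n_chips / chip_rate         # matched-filter integration time

    chips = rng.choice([-1.0, 1.0], n_chips)
    ref = np.repeat(chips, samples_per_chip).astype(complex)   # baseband replica

    t = np.arange(ref.size) / fs
    for df in (0.0, 1e3, 4e3, 8e3):                 # frequency error in Hz
        rx = ref * np.exp(2j * np.pi * df * t)      # received signal with offset
        peak = np.abs(np.vdot(ref, rx)) / ref.size  # normalized correlation peak
        print(f"df = {df:6.0f} Hz  peak = {peak:.3f}  |sinc(df*T)| = {abs(np.sinc(df * T)):.3f}")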

Relevance: 10.00%

Publisher:

Abstract:

One of the main disturbances in EEG signals is the EMG artefact generated by muscle movements. In this paper, the use of a linear-phase FIR digital low-pass filter with finite-wordlength-precision coefficients, designed using the compensation procedure, is proposed to minimise EMG artefacts in contaminated EEG signals. To make the filtering more effective, different structures are used, i.e. cascading, twicing and sharpening (apart from simple low-pass filtering) of the designed FIR filter. Modifications are proposed to the twicing and sharpening structures to regain the linear-phase characteristics that are lost in conventional twicing and sharpening operations. The efficacy of all these transformed filters in minimising EMG artefacts is studied, using SNR improvement as a performance measure for simulated signals. Time plots of the signals are also compared. The studies show that the modified sharpening structure is superior in performance to all the other proposed methods. These algorithms have also been applied to real, recorded EMG-contaminated EEG signals. Comparison of the time plots, and also of the output SNR, shows that the proposed modified sharpened structure works better in minimising EMG artefacts than the other methods considered.
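
As a hedged sketch (a generic construction, not necessarily the paper's modified structures), the code below builds the classical twicing (2A - A^2) and sharpening (3A^2 - 2A^3) filters from a linear-phase FIR prototype by delay-aligning the cascaded branches before combining them, which keeps the composite filter linear-phase, and compares output SNR on a toy EEG-plus-noise signal. Filter specifications and the test signal are arbitrary.

    # Sketch: twicing and sharpening filters built from a low-pass FIR prototype
    # by combining delay-aligned cascades of its impulse response.
    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 256.0                                    # Hz, a typical EEG sampling rate
    h = firwin(numtaps=65, cutoff=30.0, fs=fs)    # prototype H, group delay D = 32
    D = (len(h) - 1) // 2

    h2 = np.convolve(h, h)                        # H^2, delay 2D
    h3 = np.convolve(h2, h)                       # H^3, delay 3D

    def delayed(x, d, total):
        """Zero-pad impulse response x to `total` taps with d leading zeros."""
        return np.pad(x, (d, total - d - len(x)))

    h_twice = 2 * delayed(h, D, len(h2)) - h2         # amplitude 2A - A^2
    h_sharp = 3 * delayed(h2, D, len(h3)) - 2 * h3    # amplitude 3A^2 - 2A^3

    # Toy signal: an 8 Hz "EEG" rhythm plus broadband "EMG-like" noise.
    rng = np.random.default_rng(3)
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 8 * t)
    x = eeg + 0.5 * rng.standard_normal(t.size)

    for name, taps in [("plain H", h), ("twicing", h_twice), ("sharpening", h_sharp)]:
        y = lfilter(taps, [1.0], x)
        d = (len(taps) - 1) // 2                  # compensate the linear-phase delay
        err = eeg[: len(y) - d] - y[d:]
        print(name, f"SNR = {10 * np.log10(np.sum(eeg[:len(y)-d]**2) / np.sum(err**2)):.1f} dB")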

Relevance: 10.00%

Publisher:

Abstract:

The amount of data contained in electroencephalogram (EEG) recordings is quite massive, and this places constraints on bandwidth and storage. The requirement of online transmission of data calls for a scheme that allows higher performance with lower computation. Single-channel algorithms, when applied to multichannel EEG data, fail to meet this requirement. While many methods have been proposed for multichannel ECG compression, not much work appears to have been done in the area of multichannel EEG compression. In this paper, we present an EEG compression algorithm based on a multichannel model, which gives higher performance compared to other algorithms. Simulations have been performed on both normal and pathological EEG data, and it is observed that a high compression ratio with very high SNR is obtained in both cases. The reconstructed signals are found to match the original signals very closely, confirming that diagnostic information is preserved during transmission.
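
The abstract does not spell out the multichannel model, so the following is only a loosely related, hedged illustration of why exploiting inter-channel correlation helps: it compresses synthetic correlated multichannel data with a Karhunen-Loeve (PCA) transform across channels plus crude quantisation, and reports the compression ratio and reconstruction SNR, the two figures of merit quoted above. It is not the paper's algorithm.

    # Sketch: generic multichannel compression via a cross-channel KLT,
    # retaining a few components and quantising them.
    import numpy as np

    rng = np.random.default_rng(4)
    n_ch, n_samp = 16, 2048
    # Synthetic correlated "EEG": a few latent sources mixed into 16 channels.
    sources = rng.standard_normal((4, n_samp))
    X = rng.standard_normal((n_ch, 4)) @ sources + 0.05 * rng.standard_normal((n_ch, n_samp))

    mean = X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X - mean, full_matrices=False)
    k = 4                                        # components retained
    Y = U[:, :k].T @ (X - mean)                  # transform coefficients
    Yq = np.round(Y * 64) / 64                   # crude uniform quantization
    Xr = U[:, :k] @ Yq + mean                    # reconstruction

    snr = 10 * np.log10(np.sum(X**2) / np.sum((X - Xr)**2))
    ratio = X.size / (Yq.size + U[:, :k].size)   # ignoring entropy-coding overhead
    print(f"compression ratio ~ {ratio:.1f}:1, SNR ~ {snr:.1f} dB")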

Relevance: 10.00%

Publisher:

Abstract:

High-sensitivity detection techniques are required for indoor navigation using Global Navigation Satellite System (GNSS) receivers, and typically a combination of coherent and non-coherent integration is used as the test statistic for detection. Coherent integration exploits the deterministic part of the signal and is limited by the residual frequency error, navigation data bits and user dynamics, which are not known a priori. Non-coherent integration, which involves squaring of the coherent integration output, is therefore used to improve the detection sensitivity. Because of this squaring, it is robust against the artifacts introduced by data bits and/or frequency error. However, it is susceptible to uncertainty in the noise variance, and this can lead to fundamental sensitivity limits in detecting weak signals. In this work, the performance of conventional non-coherent integration-based GNSS signal detection is studied in the presence of noise uncertainty. It is shown that the performance of current state-of-the-art GNSS receivers is close to the theoretical SNR limit for reliable detection at moderate levels of noise uncertainty. Alternative robust post-coherent detectors are also analyzed and are shown to alleviate the noise uncertainty problem. Monte Carlo simulations are used to confirm the theoretical predictions.
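
As a hedged toy illustration (the correlator outputs are drawn directly from their statistics; no actual GNSS signal processing is performed), the sketch below forms the usual test statistic, a sum of K squared coherent integrations, sets its threshold from the nominal noise variance, and shows how a fraction of a dB of noise-variance uncertainty inflates the false-alarm rate of that fixed threshold. All numbers are illustrative.

    # Sketch: non-coherent integration of K coherent correlator outputs and the
    # effect of noise-variance uncertainty on a threshold set for the nominal variance.
    import numpy as np

    rng = np.random.default_rng(5)
    N_coh, K, trials = 1000, 20, 50_000       # coherent length, non-coherent sums, Monte Carlo trials
    a = np.sqrt(10 ** (-30 / 10))             # per-sample signal amplitude, -30 dB vs. unit noise variance

    def noncoherent_stat(signal_present, noise_var=1.0):
        # Coherent output: mean a*N_coh if the signal is present, plus complex
        # Gaussian noise of variance noise_var * N_coh.
        noise = np.sqrt(noise_var * N_coh / 2) * (
            rng.standard_normal((trials, K)) + 1j * rng.standard_normal((trials, K)))
        coh = a * N_coh * signal_present + noise
        return (np.abs(coh) ** 2).sum(axis=1)

    # Threshold for a 1e-3 false-alarm rate, assuming the NOMINAL noise variance.
    thr = np.quantile(noncoherent_stat(False, 1.0), 1 - 1e-3)
    for unc_db in (0.0, 0.5, 1.0):            # actual noise variance = nominal + unc_db dB
        rho = 10 ** (unc_db / 10)
        pfa = np.mean(noncoherent_stat(False, rho) > thr)
        pd = np.mean(noncoherent_stat(True, rho) > thr)
        print(f"noise uncertainty {unc_db:.1f} dB: Pfa = {pfa:.4f}, Pd = {pd:.3f}")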

Relevance: 10.00%

Publisher:

Abstract:

This paper is on the design and performance analysis of practical distributed space-time codes for wireless relay networks with multiple-antenna terminals. The amplify-and-forward scheme is used in such a way that each relay transmits a scaled version of a linear combination of the received symbols. We propose distributed generalized quasi-orthogonal space-time codes which are distributed among the source antennas and relays, and are valid for any number of relays. Assuming M-PSK and M-QAM signals, we derive a formula for the symbol error probability of the investigated scheme over Rayleigh fading channels. For sufficiently large SNR, we also derive a closed-form average SER expression. The simplicity of the asymptotic results provides valuable insights into the performance of cooperative networks and suggests means of optimizing them. Our analytical results are confirmed by simulations using full-rate, full-diversity distributed codes.
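
The closed-form SER expression itself is the paper's result; for orientation only, high-SNR symbol-error expressions for such schemes are conventionally summarised in the form

    P_s \approx \left( G_c \, \bar{\gamma} \right)^{-G_d},

where \bar{\gamma} is the average SNR, G_d is the diversity gain (the slope of the SER curve on a log-log plot) and G_c is the coding/array gain (its horizontal shift); reading off G_d and G_c from an asymptotic expression is what makes it easy to optimise the cooperative scheme.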

Relevance: 10.00%

Publisher:

Abstract:

We develop an optimal, distributed, and low-feedback timer-based selection scheme to enable next-generation rate-adaptive wireless systems to exploit multi-user diversity. In our scheme, each user sets a timer depending on its signal-to-noise ratio (SNR) and transmits a small packet to identify itself when its timer expires. When the SNR-to-timer mapping is monotone non-increasing, the timers of users with better SNRs expire earlier. Thus, the base station (BS) simply selects the first user whose timer expiry it can detect, and transmits data to it at as high a rate as is reliably possible. However, timers that expire too close to one another cannot be detected by the BS due to collisions. We characterize in detail the structure of the SNR-to-timer mapping that optimally handles these collisions to maximize the average data rate. We prove that the optimal timer values take only a discrete set of values, and that the rate-adaptation policy strongly influences the optimal scheme's structure. The optimal average rate is very close to that of ideal selection, in which the BS always selects the highest-rate user, and is much higher than that of the popular, but ad hoc, timer schemes considered in the literature.
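
As a hedged illustration of the mechanics only (the SNR-to-timer map below is an arbitrary monotone non-increasing choice, not the optimal discrete mapping derived in the paper), the sketch simulates rounds in which users set timers from their SNRs and the BS picks the earliest expiry that is not within a collision window of another expiry.

    # Sketch: one timer-based selection round per iteration. Better SNR -> smaller
    # timer; an expiry is detectable only if no other expiry falls within delta of it.
    import numpy as np

    rng = np.random.default_rng(6)
    n_users, T_max, delta, rounds = 20, 1.0, 0.02, 10_000
    mean_snr = 1.0

    hits, best_snr, sel_snr = 0, 0.0, 0.0
    for _ in range(rounds):
        snr = rng.exponential(mean_snr, n_users)        # i.i.d. Rayleigh-fading SNRs
        timer = T_max * np.exp(-snr)                    # non-increasing SNR-to-timer map (illustrative)
        order = np.argsort(timer)
        t_sorted = timer[order]
        # The BS selects the first expiry separated by at least delta from its neighbours.
        for i, t in enumerate(t_sorted):
            prev_ok = (i == 0) or (t - t_sorted[i - 1] >= delta)
            next_ok = (i == n_users - 1) or (t_sorted[i + 1] - t >= delta)
            if prev_ok and next_ok:
                hits += 1
                sel_snr += snr[order[i]]
                break
        best_snr += snr.max()

    print(f"selection success rate: {hits / rounds:.3f}")
    print(f"avg selected SNR / avg best SNR: {sel_snr / hits:.2f} / {best_snr / rounds:.2f}")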

Relevance: 10.00%

Publisher:

Abstract:

We have imaged the H92α and H75α radio recombination line (RRL) emission from the starburst galaxy NGC 253 with a resolution of ~4 pc. The peak of the RRL emission at both frequencies coincides with the unresolved radio nucleus. Both lines observed toward the nucleus are extremely wide, with FWHMs of ~200 km s^-1. Modeling the RRL and radio continuum data for the radio nucleus shows that the lines arise in gas with a density of ~10^4 cm^-3 and a mass of a few thousand solar masses, which requires an ionizing flux of (6-20) x 10^51 photons s^-1. We consider a supernova remnant (SNR) expanding in a dense medium, a star cluster, and an active galactic nucleus (AGN) as potential ionizing sources. Based on dynamical arguments, we rule out an SNR as a viable ionizing source. A star cluster model is considered, and the dynamics of the ionized gas in a stellar-wind-driven structure are investigated. Such a model is consistent with the properties of the ionized gas only for a cluster younger than ~10^5 yr. The existence of such a young cluster at the nucleus seems improbable. The third model assumes the ionizing source to be an AGN at the nucleus. In this model, it is shown that the observed X-ray flux is too weak to account for the required ionizing photon flux. However, the ionization requirement can be explained if the accretion disk is assumed to have a big blue bump in its spectrum. Hence, we favor an AGN at the nucleus as the source of the ionization responsible for the observed RRLs. A hybrid model consisting of an inner advection-dominated accretion flow disk and an outer thin disk is suggested, which could explain the radio, UV, and X-ray luminosities of the nucleus.

Relevance: 10.00%

Publisher:

Abstract:

We derive the computational cutoff rate, R_0, for coherent trellis-coded modulation (TCM) schemes on independent, identically distributed (i.i.d.) Rayleigh fading channels with (K, L) generalized selection combining (GSC) diversity, which combines the K paths with the largest instantaneous signal-to-noise ratios (SNRs) among the L available diversity paths. The cutoff rate is shown to be a simple function of the moment generating function (MGF) of the SNR at the output of the (K, L) GSC receiver. We also derive the union bound on the bit error probability of TCM schemes with (K, L) GSC in the form of a simple, finite integral. The effectiveness of this bound is verified through simulations.
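
In outline (up to normalisation conventions), the reason the MGF appears is the Chernoff bound on the pairwise error probability averaged over the fading:

    P(s_i \to s_j) \;\le\; \mathbb{E}_{\gamma}\!\left[ \exp\!\left( -\frac{\gamma\, d_{ij}^{2}}{4} \right) \right]
    \;=\; M_{\gamma}\!\left( -\frac{d_{ij}^{2}}{4} \right),

where d_{ij} is the normalised Euclidean distance between the two signal points and M_\gamma(s) = \mathbb{E}[e^{s\gamma}] is the MGF of the combiner-output SNR; both the cutoff rate and the union bound are built from sums of such terms, which is why they reduce to simple functions of the (K, L) GSC output-SNR MGF.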

Relevance: 10.00%

Publisher:

Abstract:

This paper compares and analyzes the performance of distributed cophasing techniques for uplink transmission over wireless sensor networks. We focus on a time-division duplexing approach and exploit channel reciprocity to reduce the channel feedback requirement. We consider periodic broadcast of known pilot symbols by the fusion center (FC), and maximum-likelihood estimation of the channel by the sensor nodes for the subsequent uplink cophasing transmission. We assume carrier and phase synchronization across the participating nodes for analytical tractability. We study binary signaling over frequency-flat fading channels, and quantify system performance measures such as the expected gain in the received signal-to-noise ratio (SNR) and the average probability of error at the FC, as a function of the number of sensor nodes and the pilot overhead. Our results show that a modest amount of accumulated pilot SNR is sufficient to realize a large fraction of the maximum possible beamforming gain. We also investigate the performance gains obtained by censoring transmissions at the sensors based on the estimated channel state, and the benefits of using maximum ratio transmission (MRT) and truncated channel inversion (TCI) at the sensors in addition to cophasing transmission. Simulation results corroborate the theoretical expressions and show the relative performance benefits offered by the various schemes.
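
As a hedged Monte Carlo sketch (a generic model, not the paper's exact signal model or pilot protocol), the code below lets each node estimate its reciprocal channel from one noisy pilot observation, apply a unit-modulus cophasing weight, and compares the resulting array gain with the perfect-CSI cophasing gain, illustrating that a modest pilot SNR already recovers most of the beamforming gain.

    # Sketch: distributed cophasing with pilot-based phase estimates versus
    # perfect-CSI cophasing.
    import numpy as np

    rng = np.random.default_rng(7)
    N, trials = 20, 5000                      # sensor nodes, Monte Carlo trials

    for pilot_snr_db in (0, 5, 10, 20):
        pilot_snr = 10 ** (pilot_snr_db / 10)
        gain = np.empty(trials)
        ideal = np.empty(trials)
        for i in range(trials):
            h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
            # Each node estimates its (reciprocal) channel from one noisy pilot observation.
            h_est = h + (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * pilot_snr)
            w = np.exp(-1j * np.angle(h_est))          # cophasing only: unit-modulus weights
            gain[i] = np.abs(np.sum(w * h)) ** 2       # received power with estimated phases
            ideal[i] = np.sum(np.abs(h)) ** 2          # perfect-CSI cophasing power
        print(f"pilot SNR {pilot_snr_db:2d} dB: fraction of ideal cophasing gain = "
              f"{gain.mean() / ideal.mean():.2f}")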