870 results for Contrast-to-noise ratio


Relevance: 100.00%

Abstract:

Data are presented for a pH-adjustable liquid UV-matrix-assisted laser desorption ionization (MALDI) matrix for mass spectrometry analysis. The liquid matrix system possesses high analytical sensitivity, within the same order of magnitude as that achievable by commonly used solid UV-MALDI matrices such as 2,5-dihydroxybenzoic acid, but with improved spot homogeneity and reproducibility. The pH of the matrix was adjusted by the addition of up to 0.35% trifluoroacetic acid and up to 200 mM ammonium bicarbonate, achieving an on-target pH range of 3.5-8.6. Altering the pH does not appear to affect the overall sample signal intensity or the achievable signal-to-noise ratio, nor the individual peptide ion signals from a mixture of peptides with varying isoelectric points (pI). In addition, the pH adjustment has allowed a tryptic digest to be performed within the diluted pH-optimized liquid matrix.

Relevance: 100.00%

Abstract:

The aim of this paper is to study the impact of channel state information on the design of cooperative transmission protocols. This is motivated by the fact that the performance gain achieved by cooperative diversity comes at the price of extra bandwidth consumption. Several opportunistic relaying strategies are developed to fully utilize the different types of a priori channel information. Information-theoretic measures such as outage probability and the diversity-multiplexing tradeoff are derived for the proposed protocols. The analytical and numerical results demonstrate that the use of such a priori information increases the spectral efficiency of cooperative diversity, especially at low signal-to-noise ratio.
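The outage-probability measure lends itself to a quick Monte Carlo check. The sketch below is an illustration under assumed Rayleigh fading and a fixed target rate, not the paper's actual protocols; the selection step idealises opportunistic relaying by transmitting via the strongest of several channels and ignores the two-hop loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_direct(snr_db, rate=1.0, trials=200_000):
    """Outage probability of a direct Rayleigh-fading link: the event that
    instantaneous capacity log2(1 + SNR*|h|^2) falls below the target rate."""
    snr = 10 ** (snr_db / 10)
    h2 = rng.exponential(size=trials)            # |h|^2 under Rayleigh fading
    return np.mean(np.log2(1 + snr * h2) < rate)

def outage_selection(snr_db, n_relays, rate=1.0, trials=200_000):
    """Idealised opportunistic relaying: use the strongest of n_relays
    channels (two-hop losses ignored for brevity)."""
    snr = 10 ** (snr_db / 10)
    h2 = rng.exponential(size=(trials, n_relays)).max(axis=1)
    return np.mean(np.log2(1 + snr * h2) < rate)

p_direct = outage_direct(10.0)        # analytic value: 1 - exp(-0.1) ~ 0.095
p_select = outage_selection(10.0, 3)  # analytic value: (1 - exp(-0.1))^3
```

Selection over 3 independent channels raises the diversity order from 1 to 3, which is visible as the outage probability dropping by roughly two orders of magnitude at this SNR.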

Relevance: 100.00%

Abstract:

This work describes the use of a quadratic programming optimization procedure for designing asymmetric apodization windows to de-noise THz transient interferograms and compares the results with those obtained when wavelet signal processing algorithms are adopted. A system identification technique in the wavelet domain is also proposed for estimating the complex insertion loss function. The proposed techniques can enhance the frequency-dependent dynamic range of an experiment and should be of particular interest to the THz imaging and tomography community. Future advances in THz sources and detectors are likely to increase the signal-to-noise ratio of the recorded THz transients; high-quality apodization techniques will then become more important and may set the limit on the achievable accuracy of the deduced spectrum.
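The effect of apodization on frequency-domain dynamic range can be illustrated on a toy interferogram. The window below is a hand-shaped asymmetric taper, not the quadratic-programming-optimised design of the paper, and the transient is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)

# Synthetic THz-style transient: damped oscillation plus white detector noise
clean = np.exp(-t / 80.0) * np.cos(2 * np.pi * 0.05 * t)
noisy = clean + 0.02 * rng.standard_normal(n)

# Asymmetric apodization: fast rise, long smooth decay (illustrative shape)
rise, fall = 16, 768
window = np.ones(n)
window[:rise] = np.sin(np.linspace(0.0, np.pi / 2, rise)) ** 2
window[-fall:] = np.cos(np.linspace(0.0, np.pi / 2, fall)) ** 2

def peak_and_floor(x):
    """Spectral peak and a high-frequency noise-floor estimate."""
    spec = np.abs(np.fft.rfft(x))
    return spec.max(), np.median(spec[3 * len(spec) // 4:])

peak_raw, floor_raw = peak_and_floor(noisy)
peak_apod, floor_apod = peak_and_floor(noisy * window)

dr_raw = peak_raw / floor_raw      # frequency-domain dynamic range
dr_apod = peak_apod / floor_apod   # improves after apodization
```

Tapering the late, signal-free part of the transient suppresses noise power while leaving the early signal energy almost untouched, so the spectral noise floor falls and the dynamic range grows.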

Relevance: 100.00%

Abstract:

This work compares classification results for lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained with a THz-transient spectrometer is contrasted by evaluating classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures, generating the data sets presented to the classifier for both learning and validation. This gradual degradation of the interferograms by increasing noise levels is equivalent to performing measurements with a reduced integration time. Three signal processing algorithms were adopted for evaluating the complex insertion loss function of the samples under study: (a) standard evaluation by ratioing the sample spectra with the background spectra; (b) a subspace identification algorithm; and (c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are computed for the three data sets, and a discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1-1.0 THz using the above algorithms.
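The degradation step described above amounts to scaling zero-mean white Gaussian noise so that a chosen SNR (in dB) is met. A minimal sketch, with a toy waveform standing in for the measured THz signatures:

```python
import numpy as np

rng = np.random.default_rng(2)

def degrade_to_snr(x, snr_db):
    """Add zero-mean white Gaussian noise so the result has a chosen SNR (dB)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # toy interferogram
noisy = degrade_to_snr(clean, 10.0)                   # 10 dB SNR copy

# Empirical SNR of the degraded copy, as a sanity check
resid = noisy - clean
snr_emp = 10 * np.log10(np.mean(clean ** 2) / np.mean(resid ** 2))
```

Sweeping `snr_db` downwards reproduces the paper's gradual degradation, which mimics progressively shorter integration times.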

Relevance: 100.00%

Abstract:

Different types of mental activity are utilised as input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, so averaging over a number of trials is necessary before the signals become usable. ERP-based BCI operation and system usability could be improved if single-trial ERP data could be used. Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity and other components of non-cerebral origin. Selecting specific components and using them to reconstruct “denoised” single-trial data could improve signal quality, allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed from a selection of components estimated by applying ICA to the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
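A contrast-to-noise measure of this kind can be sketched as the amplitude contrast between a post-stimulus window and a pre-stimulus baseline, scaled by baseline variability. The ERP shape, window placement and the stand-in for the ICA-reconstructed trial below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

def cnr(trial, signal_window, baseline_window):
    """Contrast-to-noise ratio of a single trial: mean amplitude contrast
    between a post-stimulus window and the pre-stimulus baseline, divided
    by the baseline standard deviation."""
    s = trial[signal_window]
    b = trial[baseline_window]
    return abs(s.mean() - b.mean()) / b.std()

# Toy single-trial ERP: flat baseline, P300-like bump after the "stimulus"
n = 600
erp = np.zeros(n)
erp[300:400] = np.exp(-0.5 * ((np.arange(100) - 50) / 20.0) ** 2)

raw = erp + 0.5 * rng.standard_normal(n)        # noisy single trial
denoised = erp + 0.1 * rng.standard_normal(n)   # stand-in for ICA reconstruction

sig = slice(300, 400)    # post-stimulus window
base = slice(0, 200)     # pre-stimulus baseline
cnr_raw = cnr(raw, sig, base)
cnr_denoised = cnr(denoised, sig, base)
```

Reducing the residual EEG background (here simply by lowering the noise amplitude) raises the CNR, which is the direction of improvement the paper reports for ICA-reconstructed trials.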

Relevance: 100.00%

Abstract:

A method of estimating dissipation rates from a vertically pointing Doppler lidar with high temporal and spatial resolution has been evaluated by comparison with independent measurements derived from a balloon-borne sonic anemometer. This method utilizes the variance of the mean Doppler velocity from a number of sequential samples and requires an estimate of the horizontal wind speed. The noise contribution to the variance can be estimated from the observed signal-to-noise ratio and removed where appropriate. The relative size of the noise variance to the observed variance provides a measure of the confidence in the retrieval. Comparison with in situ dissipation rates derived from the balloon-borne sonic anemometer reveals that this particular Doppler lidar is capable of retrieving dissipation rates over a range of at least three orders of magnitude. This method is most suitable for retrieval of dissipation rates within the convective well-mixed boundary layer, where the scales of motion that the Doppler lidar probes remain well within the inertial subrange. Caution must be applied when estimating dissipation rates in more quiescent conditions. For the particular Doppler lidar described here, the selection of suitably short integration times will permit this method to be applicable in such situations, but at the expense of accuracy in the Doppler velocity estimates. The two case studies presented here suggest that, with profiles every 4 s, reliable estimates of ϵ can be derived to within at least an order of magnitude throughout almost all of the lowest 2 km and, in the convective boundary layer, to within 50%. Increasing the integration time for individual profiles to 30 s can improve the accuracy substantially but potentially confines retrievals to within the convective boundary layer. Therefore, optimization of certain instrument parameters may be required for specific implementations.
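The variance method can be sketched as follows; the scaling constant, the placeholder noise model and the length-scale choice below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

KOLMOGOROV_A = 0.52  # one-dimensional Kolmogorov constant (assumed value)

def noise_variance(snr):
    """Crude placeholder for the velocity-estimate noise variance implied by
    the observed signal-to-noise ratio (instrument-specific in practice)."""
    return 0.01 / snr

def dissipation_rate(v_samples, snr, wind_speed, dwell_time):
    """TKE dissipation rate from N sequential mean Doppler velocities.

    The noise contribution is estimated from the SNR and removed, then an
    inertial-subrange scaling is applied; L is the horizontal distance
    advected past the beam while the N samples were collected."""
    var_obs = np.var(v_samples, ddof=1)
    var_air = max(var_obs - noise_variance(snr), 0.0)
    L = wind_speed * dwell_time * len(v_samples)
    return 2 * np.pi * (2 / (3 * KOLMOGOROV_A)) ** 1.5 * var_air ** 1.5 / L

# More variable velocities (stronger turbulence) should yield a larger epsilon
v_quiet = np.array([1.0, 1.1, 0.9, 1.05, 0.95])
v_turbulent = 1.0 + 5.0 * (v_quiet - 1.0)
eps_quiet = dissipation_rate(v_quiet, snr=10.0, wind_speed=5.0, dwell_time=4.0)
eps_turb = dissipation_rate(v_turbulent, snr=10.0, wind_speed=5.0, dwell_time=4.0)
```

The subtraction step is where the SNR enters: when the noise variance approaches the observed variance, `var_air` collapses towards zero and the retrieval loses confidence, matching the caution the abstract advises in quiescent conditions.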

Relevance: 100.00%

Abstract:

This paper reports on the design and manufacture of an ultra-wide (5-30 µm) infrared edge filter for use in FTIR studies of the low-frequency vibrational modes of metallo-proteins. We present details of the spectral design and manufacture of a filter that meets the demanding bandwidth and transparency requirements of the application, together with spectra illustrating the new measurements made possible by such a filter. A design model of the filter and the materials used in its construction has been developed that is capable of accurately predicting spectral performance both at 300 K and at the reduced operating temperature of 200 K. This design model is based on the optical and semiconductor properties of a multilayer filter containing PbTe (IV-VI) layer material in combination with the dielectric dispersion of ZnSe (II-VI) deposited on a CdTe (II-VI) substrate, together with BaF2 (II-VII) as an antireflection layer. Comparisons between the computed spectral performance of the model and spectral measurements from manufactured coatings over a wavelength range of 4-30 µm and a temperature range of 300-200 K are presented. Finally, we present the results of FTIR measurements of Photosystem II showing the improvement in the signal-to-noise ratio of the measurement due to the filter, together with a light-induced FTIR difference spectrum of Photosystem II.
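Multilayer filter models of this kind are conventionally computed with the characteristic-matrix (transfer-matrix) method. A minimal normal-incidence sketch with illustrative, lossless, dispersion-free indices, not the measured PbTe/ZnSe/CdTe data of the paper:

```python
import numpy as np

def transmittance(n_layers, d_layers, n_in, n_out, wavelength):
    """Normal-incidence transmittance of a lossless dielectric multilayer
    via the characteristic-matrix method."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength   # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)          # amplitude reflectance
    return 1.0 - abs(r) ** 2                     # lossless stack: T = 1 - R

# Quarter-wave high/low stack centred at 10 µm on a CdTe-like substrate:
# strongly reflecting at the centre, transparent away from the stop band
lam0 = 10.0
nH, nL = 4.0, 2.2                                # illustrative indices
stack_n = [nH, nL] * 6
stack_d = [lam0 / (4 * nH), lam0 / (4 * nL)] * 6
T_centre = transmittance(stack_n, stack_d, 1.0, 2.7, lam0)   # in the stop band
T_pass = transmittance(stack_n, stack_d, 1.0, 2.7, 20.0)     # in the pass band
```

Sweeping the wavelength traces out the stop band and pass-band ripple; adding measured dispersion and temperature-dependent indices is what turns such a sketch into the predictive design model the paper describes.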

Relevance: 100.00%

Abstract:

Peak picking is an early key step in MS data analysis. We compare three commonly used approaches to peak picking and discuss their merits by means of statistical analysis. Methods investigated encompass signal-to-noise ratio, continuous wavelet transform, and a correlation-based approach using a Gaussian template. Functionality of the three methods is illustrated and discussed in a practical context using a mass spectral data set created with MALDI-TOF technology. Sensitivity and specificity are investigated using a manually defined reference set of peaks. As an additional criterion, the robustness of the three methods is assessed by a perturbation analysis and illustrated using ROC curves.
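Two of the three approaches are straightforward to sketch: an SNR-threshold picker with a robust (MAD-based) noise estimate, and the same picker applied after correlation with a Gaussian template. The spectrum, widths and thresholds below are illustrative, not the paper's MALDI-TOF data or tuned parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

def snr_peaks(spectrum, k=5.0):
    """Flag local maxima exceeding k times a robust (MAD) noise estimate."""
    noise = np.median(np.abs(spectrum - np.median(spectrum))) / 0.6745
    is_max = np.r_[False, (spectrum[1:-1] > spectrum[:-2]) &
                          (spectrum[1:-1] > spectrum[2:]), False]
    return np.flatnonzero(is_max & (spectrum > k * noise))

def template_peaks(spectrum, sigma=3.0, k=5.0):
    """Correlate with a Gaussian template first, then apply the same picker."""
    half = int(4 * sigma)
    x = np.arange(-half, half + 1)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    g /= g.sum()
    smoothed = np.convolve(spectrum, g, mode="same")
    return snr_peaks(smoothed, k)

# Toy mass spectrum: two Gaussian peaks on a noisy baseline
mz = np.arange(1000)
spec = (10 * np.exp(-0.5 * ((mz - 300) / 3) ** 2)
        + 6 * np.exp(-0.5 * ((mz - 700) / 3) ** 2)
        + rng.normal(0.0, 0.5, mz.size))
picked = template_peaks(spec)
```

The template correlation suppresses noise that does not match the expected peak shape before thresholding, which is the intuition behind the robustness comparison in the abstract.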

Relevance: 100.00%

Abstract:

One major assumption in all orthogonal space-time block coding (O-STBC) schemes is that the channel remains static over the length of the code word. However, time-selective fading channels do exist, and in such cases conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratio (SNR). As a sequel to the authors' previous papers on this subject, this paper aims to eliminate the error floor of the H(i)-coded O-STBC system (i = 3 and 4) by employing the techniques of: 1) zero forcing (ZF) and 2) parallel interference cancellation (PIC). It is shown that for an H(i)-coded system the PIC is a much better choice than the ZF in terms of both performance and computational complexity. Compared with the conventional H(i) detector, the PIC detector incurs a moderately higher computational complexity, but this is well justified by the substantial performance improvement.
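The PIC idea can be sketched on a generic linear model y = Hs + n in which the effective channel matrix has lost its orthogonality (a stand-in for time-selective fading); this is a generic illustration, not the H(i) code structure itself:

```python
import numpy as np

rng = np.random.default_rng(5)

def pic_detect(y, H, n_iter=3):
    """Parallel interference cancellation for y = H s + n with BPSK symbols.

    Start from matched-filter decisions, then repeatedly subtract each
    symbol's estimated interference from the received vector and re-decide."""
    s_hat = np.sign(H.T @ y)
    for _ in range(n_iter):
        new = np.empty_like(s_hat)
        for k in range(H.shape[1]):
            interference = H @ s_hat - H[:, k] * s_hat[k]
            new[k] = np.sign(H[:, k] @ (y - interference))
        s_hat = new
    return s_hat

# Mildly non-orthogonal channel (stand-in for time-selective fading breaking
# the O-STBC orthogonality), BPSK symbols, light noise
H = np.array([[1.0, 0.25],
              [0.2, 1.0]])
s = np.array([1.0, -1.0])
y = H @ s + 0.01 * rng.standard_normal(2)
detected = pic_detect(y, H)
```

Because the cancellation for every symbol uses the same previous-stage decisions, the stages run "in parallel"; a few iterations suffice once the initial decisions are mostly correct, which is why PIC adds only moderate complexity over the conventional detector.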

Relevance: 100.00%

Abstract:

One major assumption in all orthogonal space-time block coding (O-STBC) schemes is that the channel remains static over the entire length of the codeword. However, time-selective fading channels do exist, and in such cases the conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratio (SNR). This paper addresses such an issue by introducing a parallel interference cancellation (PIC) based detector for the Gi-coded systems (i=3 and 4).

Relevance: 100.00%

Abstract:

All orthogonal space-time block coding (O-STBC) schemes are based on the assumption that the channel remains static over the entire length of the codeword. However, time-selective fading channels do exist, and in many cases the conventional O-STBC detectors then suffer from a large error floor at high signal-to-noise ratio (SNR). This paper addresses this issue by introducing a parallel interference cancellation (PIC) based detector for the Gi-coded systems (i=3 and 4).

Relevance: 100.00%

Abstract:

We propose a new satellite mission to deliver high quality measurements of upper-air water vapour. The concept centres on a lidar in limb-sounding occultation geometry, designed to operate as a very long path system for differential absorption measurements. We present a preliminary performance analysis with a system sized to send 75 mJ pulses at 25 Hz at four wavelengths close to 935 nm, to up to 5 microsatellites in a counter-rotating orbit, carrying retroreflectors characterized by a reflected beam divergence of roughly twice the emitted laser beam divergence of 15 µrad. This provides water vapour profiles with a vertical sampling of 110 m; preliminary calculations suggest that the system could detect concentrations of less than 5 ppm. A secondary payload of a fairly conventional medium-resolution multispectral radiometer allows wide-swath cloud and aerosol imaging. The total weight and power of the system are estimated at 3 tons and 2,700 W, respectively. This novel concept presents significant challenges, including the performance of the lasers in space, the tracking between the main spacecraft and the retroreflectors, the refractive effects of turbulence, and the design of the telescopes to achieve a high signal-to-noise ratio for the high-precision measurements. The mission concept was conceived at the Alpbach Summer School 2010.
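The differential-absorption retrieval underlying such a mission can be sketched from the ratio of on-line to off-line returns between two range gates. The cross-section, density and return powers below are illustrative numbers only, not mission values:

```python
import numpy as np

def dial_number_density(p_on, p_off, r, delta_sigma):
    """Differential-absorption retrieval: layer-mean absorber number density
    between consecutive range gates from on-line / off-line return powers."""
    ratio = (p_off[1:] * p_on[:-1]) / (p_on[1:] * p_off[:-1])
    dr = np.diff(r)
    return np.log(ratio) / (2 * delta_sigma * dr)

# Forward-model a two-gate toy case and invert it
delta_sigma = 1e-24   # m^2, illustrative differential absorption cross-section
n_true = 1e22         # m^-3, illustrative water-vapour number density
r = np.array([0.0, 110.0])    # 110 m vertical sampling, as in the abstract
p_off = np.array([1.0, 0.9])  # off-line return: no differential absorption
p_on = p_off * np.exp(-2 * n_true * delta_sigma * r)  # on-line return
n_est = dial_number_density(p_on, p_off, r, delta_sigma)
```

Taking the double ratio cancels range-dependent instrument and scattering terms common to both wavelengths, which is why the technique demands high SNR rather than absolute calibration.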

Relevance: 100.00%

Abstract:

In this letter, we consider beamforming strategies in amplify-and-forward (AF) two-way relay channels, where the two terminals and the relay are equipped with multiple antennas. Our aim is to maximize the worse of the two end-to-end signal-to-noise ratios so that the reliability of both terminals can be guaranteed. We show that the optimization problem can be recast as a generalized fractional programme and solved using the Dinkelbach-type procedure combined with semidefinite programming. Simulation results confirm the efficiency of the proposed strategies.
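The Dinkelbach-type procedure reduces a fractional objective to a sequence of parametric subproblems. A minimal sketch on a scalar toy problem over a finite grid (in the letter each subproblem is a semidefinite programme, not a grid search):

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Dinkelbach-type procedure for max f(x)/g(x) over a finite candidate set.

    At each step solve the parametric problem max f(x) - lam * g(x); the
    ratio at the maximiser becomes the next lam.  The optimum is reached
    when the parametric optimum drops to zero."""
    lam = 0.0
    x_star = candidates[0]
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        x_star = candidates[np.argmax(vals)]
        if vals.max() < tol:
            return x_star, lam
        lam = f(np.array([x_star]))[0] / g(np.array([x_star]))[0]
    return x_star, lam

# Toy fractional programme: maximise (1 + 2x) / (1 + x^2) on a grid;
# the analytic optimum is x = (sqrt(5) - 1) / 2 with ratio (1 + sqrt(5)) / 2
xs = np.linspace(0.0, 3.0, 3001)
f = lambda x: 1 + 2 * x
g = lambda x: 1 + x ** 2
x_opt, ratio_opt = dinkelbach(f, g, xs)
```

The attraction of the procedure is that each subproblem is linear in the objective pair (f, g) for a fixed `lam`, so a convex solver such as an SDP can handle it even when the ratio itself is not concave.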

Relevance: 100.00%

Abstract:

The atmospheric response to the evolution of global sea surface temperatures from 1979 to 1992 is studied using the Max-Planck-Institut 19-level atmospheric general circulation model ECHAM3 at T42 resolution. Five separate 14-year integrations are performed, and results are presented for each individual realization and for the ensemble-averaged response. The results are compared to a 30-year control integration using climatological monthly mean sea surface temperatures, and to analysis data. It is found that the ECHAM3 model, by and large, does reproduce the observed response pattern to El Niño and La Niña. During El Niño events, the subtropical jet streams in both hemispheres are intensified and displaced equatorward, and there is a tendency towards weak upper easterlies over the equator. The Southern Oscillation is a very stable feature of the integrations and is accurately reproduced in all experiments. The inter-annual variability at middle and high latitudes, on the other hand, is strongly dominated by chaotic dynamics, and the tropical SST forcing only modulates the atmospheric circulation. The potential predictability of the model is investigated for six different regions. The signal-to-noise ratio is large in most parts of the tropical belt, of medium strength in the western hemisphere and generally small over the European area. The ENSO signal is most pronounced during the boreal spring. A particularly strong springtime signal in the extratropical precipitation field can be found over the southern United States. Western Canada is normally warmer during the warm ENSO phase, while northern Europe is warmer than normal during the ENSO cold phase. The reason is advection of warm air due to a more intense Pacific low than normal during the warm ENSO phase and a more intense Icelandic low than normal during the cold ENSO phase, respectively.
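The potential-predictability diagnostic can be sketched as the ratio of forced (ensemble-mean) variance to the internal spread about that mean. The forcing shape, member count and noise levels below are illustrative stand-ins, not ECHAM3 output:

```python
import numpy as np

rng = np.random.default_rng(6)

def signal_to_noise(ensemble):
    """Potential-predictability S/N for an (n_members, n_years) ensemble:
    variance of the forced (ensemble-mean) signal divided by the variance
    of the internal spread about that mean."""
    forced = ensemble.mean(axis=0)    # SST-forced signal estimate
    noise = ensemble - forced         # chaotic internal variability
    return forced.var(ddof=1) / noise.var(ddof=1)

years, members = 14, 5
forced_signal = np.sin(2 * np.pi * np.arange(years) / 4.0)  # ENSO-like forcing

# Tropics: weak internal noise; midlatitudes: strong chaotic variability
tropics = forced_signal + 0.3 * rng.standard_normal((members, years))
midlats = forced_signal + 2.0 * rng.standard_normal((members, years))

snr_tropics = signal_to_noise(tropics)
snr_midlats = signal_to_noise(midlats)
```

With the same forcing in both regions, the larger internal spread at midlatitudes drives the ratio down, mirroring the paper's finding of large S/N in the tropical belt and small S/N over Europe.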