43 results for Signal to noise ratio
Classification of lactose and mandelic acid THz spectra using subspace and wavelet-packet algorithms
Abstract:
This work compares classification results for lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures and thereby generate the data sets presented to the classifier for both learning and validation purposes. This gradual degradation of the interferograms by increasing the noise level is equivalent to performing measurements with a reduced integration time. Three signal-processing algorithms were adopted for evaluating the complex insertion loss function of the samples under study: (a) standard evaluation by ratioing the sample spectra against the background spectra, (b) a subspace identification algorithm, and (c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets. A discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1–1.0 THz using the above algorithms.
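The artificial SNR degradation described above can be sketched as follows; the damped-oscillation trace, sampling, and noise amplitudes are illustrative stand-ins, not the paper's actual measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a THz time-domain trace: a damped oscillation.
t = np.linspace(0.0, 1.0, 1024)
signal = np.exp(-5.0 * t) * np.sin(2 * np.pi * 40 * t)

def snr_db(clean, noisy):
    """SNR in dB of a noisy trace relative to the clean reference."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

# Degrade the trace with zero-mean white Gaussian noise of increasing
# amplitude, mimicking progressively shorter integration times.
snrs = []
for sigma in [0.001, 0.01, 0.1]:
    noisy = signal + rng.normal(0.0, sigma, signal.shape)
    snrs.append(snr_db(signal, noisy))
```

Each tenfold increase in noise amplitude costs roughly 20 dB of SNR, which is the sense in which added noise emulates reduced integration time.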
Abstract:
The aim of this paper is to study the impact of channel state information on the design of cooperative transmission protocols. This is motivated by the fact that the performance gain achieved by cooperative diversity comes at the price of extra bandwidth consumption. Several opportunistic relaying strategies are developed to fully utilize the different types of a priori channel information. The analytical and numerical results demonstrate that the use of such a priori information increases the spectral efficiency of cooperative diversity, especially at low signal-to-noise ratio.
Abstract:
A method of estimating dissipation rates from a vertically pointing Doppler lidar with high temporal and spatial resolution has been evaluated by comparison with independent measurements derived from a balloon-borne sonic anemometer. This method utilizes the variance of the mean Doppler velocity from a number of sequential samples and requires an estimate of the horizontal wind speed. The noise contribution to the variance can be estimated from the observed signal-to-noise ratio and removed where appropriate. The relative size of the noise variance to the observed variance provides a measure of the confidence in the retrieval. Comparison with in situ dissipation rates derived from the balloon-borne sonic anemometer reveals that this particular Doppler lidar is capable of retrieving dissipation rates over a range of at least three orders of magnitude. This method is most suitable for retrieval of dissipation rates within the convective well-mixed boundary layer, where the scales of motion that the Doppler lidar probes remain well within the inertial subrange. Caution must be applied when estimating dissipation rates in more quiescent conditions. For the particular Doppler lidar described here, the selection of suitably short integration times will permit this method to be applicable in such situations, but at the expense of accuracy in the Doppler velocity estimates. The two case studies presented here suggest that, with profiles every 4 s, reliable estimates of ϵ can be derived to within at least an order of magnitude throughout almost all of the lowest 2 km and, in the convective boundary layer, to within 50%. Increasing the integration time for individual profiles to 30 s can improve the accuracy substantially but potentially confines retrievals to within the convective boundary layer. Therefore, optimization of certain instrument parameters may be required for specific implementations.
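A minimal sketch of the noise-variance correction described above; the function name and inputs are hypothetical, and the actual lidar processing chain is considerably more involved:

```python
import numpy as np

def corrected_variance(velocities, noise_std):
    """Variance of sequential mean Doppler velocities with the noise
    contribution removed, plus a simple confidence measure based on the
    relative size of the noise variance to the observed variance."""
    obs_var = np.var(velocities)
    noise_var = noise_std ** 2
    true_var = max(obs_var - noise_var, 0.0)   # clip at zero
    confidence = 1.0 - noise_var / obs_var if obs_var > 0 else 0.0
    return true_var, confidence
```

When the noise variance approaches the observed variance, the confidence measure approaches zero and the retrieval should be treated with caution.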
Abstract:
This paper reports on the design and manufacture of an ultra-wide (5–30 µm) infrared edge filter for use in FTIR studies of the low-frequency vibrational modes of metallo-proteins. We present details of the spectral design and manufacture of a filter that meets the demanding bandwidth and transparency requirements of the application, together with spectra demonstrating the new data made possible by such a filter. A design model of the filter and the materials used in its construction has been developed that is capable of accurately predicting spectral performance both at 300 K and at the reduced operating temperature of 200 K. This design model is based on the optical and semiconductor properties of a multilayer filter containing PbTe (IV-VI) layer material in combination with the dielectric dispersion of ZnSe (II-VI) deposited on a CdTe (II-VI) substrate, together with the use of BaF2 (II-VII) as an antireflection layer. Comparisons between the computed spectral performance of the model and spectral measurements from manufactured coatings over a wavelength range of 4–30 µm and a temperature range of 300–200 K are presented. Finally, we present the results of FTIR measurements of Photosystem II showing the improvement in signal-to-noise ratio of the measurement due to using the filter, together with a light-induced FTIR difference spectrum of Photosystem II.
Abstract:
Peak picking is an early key step in MS data analysis. We compare three commonly used approaches to peak picking and discuss their merits by means of statistical analysis. Methods investigated encompass signal-to-noise ratio, continuous wavelet transform, and a correlation-based approach using a Gaussian template. Functionality of the three methods is illustrated and discussed in a practical context using a mass spectral data set created with MALDI-TOF technology. Sensitivity and specificity are investigated using a manually defined reference set of peaks. As an additional criterion, the robustness of the three methods is assessed by a perturbation analysis and illustrated using ROC curves.
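A minimal illustration of the signal-to-noise-ratio approach to peak picking; the MAD-based noise estimate and the threshold factor are illustrative choices, not necessarily those used in the paper:

```python
import numpy as np

def pick_peaks_snr(spectrum, k=3.0):
    """Peak picking by a simple signal-to-noise criterion: local maxima
    whose height exceeds k times a robust noise estimate (the median
    absolute deviation, scaled to match a Gaussian standard deviation)."""
    noise = 1.4826 * np.median(np.abs(spectrum - np.median(spectrum)))
    peaks = []
    for i in range(1, len(spectrum) - 1):
        is_local_max = spectrum[i] > spectrum[i - 1] and spectrum[i] >= spectrum[i + 1]
        if is_local_max and spectrum[i] > k * noise:
            peaks.append(i)
    return peaks
```

The continuous-wavelet-transform and template-correlation approaches compared in the paper replace this simple height threshold with shape-aware criteria, which is what the robustness analysis probes.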
Abstract:
All orthogonal space-time block coding (O-STBC) schemes are based on the following assumption: the channel remains static over the entire length of the codeword. However, time-selective fading channels do exist, and in many cases conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratio (SNR). This paper addresses this issue by introducing a parallel interference cancellation (PIC) based detector for the Gi coded systems (i = 3 and 4).
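The generic idea of parallel interference cancellation can be sketched for a linear model y = Hx + n with BPSK symbols; this is a simplified illustration of the PIC principle, not the paper's Gi-specific detector:

```python
import numpy as np

def pic_detect(y, H, n_iter=3):
    """Generic parallel interference cancellation: start from
    matched-filter decisions, then repeatedly re-detect each symbol
    after subtracting the estimated interference from all the other
    symbols (all symbols updated in parallel each iteration)."""
    K = H.shape[1]
    x_hat = np.sign(H.T @ y)          # initial matched-filter stage
    x_hat[x_hat == 0] = 1.0
    for _ in range(n_iter):
        x_new = np.empty(K)
        for k in range(K):
            # Interference from all symbols except symbol k:
            interference = H @ x_hat - H[:, k] * x_hat[k]
            r = y - interference       # cancel it, then re-detect
            x_new[k] = 1.0 if H[:, k] @ r >= 0 else -1.0
        x_hat = x_new
    return x_hat
```

In a noiseless test the cancelled residual reduces to the desired symbol's own contribution, so the decisions are exact.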
Abstract:
We propose a new satellite mission to deliver high-quality measurements of upper-air water vapour. The concept centres on a LiDAR in a limb-sounding occultation geometry, designed to operate as a very long path system for differential absorption measurements. We present a preliminary performance analysis with a system sized to send 75 mJ pulses at 25 Hz at four wavelengths close to 935 nm, to up to 5 microsatellites in a counter-rotating orbit, carrying retroreflectors characterized by a reflected beam divergence of roughly twice the emitted laser beam divergence of 15 µrad. This provides water vapour profiles with a vertical sampling of 110 m; preliminary calculations suggest that the system could detect concentrations of less than 5 ppm. A secondary payload of a fairly conventional medium-resolution multispectral radiometer allows wide-swath cloud and aerosol imaging. The total weight and power of the system are estimated at 3 tons and 2,700 W respectively. This novel concept presents significant challenges, including the performance of the lasers in space, the tracking between the main spacecraft and the retroreflectors, the refractive effects of turbulence, and the design of the telescopes to achieve a high signal-to-noise ratio for the high-precision measurements. The mission concept was conceived at the Alpbach Summer School 2010.
Abstract:
In this letter, we consider beamforming strategies in amplify-and-forward (AF) two-way relay channels, where the two terminals and the relay are equipped with multiple antennas. Our aim is to optimize the worse of the end-to-end signal-to-noise ratios of the two links, so that the reliability of both terminals can be guaranteed. We show that the optimization problem can be recast as a generalized fractional programming problem and solved by using the Dinkelbach-type procedure combined with semidefinite programming. Simulation results confirm the efficiency of the proposed strategies.
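The Dinkelbach-type procedure mentioned above can be illustrated on a toy scalar fractional problem over a finite grid of candidates; the paper solves a semidefinite program at each step instead of the grid search used here:

```python
import numpy as np

def dinkelbach_max_ratio(f, g, candidates, tol=1e-9, max_iter=100):
    """Dinkelbach's procedure for max f(x)/g(x) with g > 0: at each
    step solve the parametric problem max_x f(x) - lam * g(x), then
    update lam to the ratio at the maximizer; stop when the parametric
    optimum is (numerically) zero."""
    lam = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        i = int(np.argmax(vals))       # parametric subproblem
        if vals[i] < tol:              # F(lam) ~ 0 => lam is optimal
            break
        lam = f(candidates[i]) / g(candidates[i])
    return lam, candidates[i]
```

The parameter lam increases monotonically across iterations and converges to the maximum achievable ratio, which is why the procedure suits max-min SNR objectives.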
Abstract:
The atmospheric response to the evolution of global sea surface temperatures from 1979 to 1992 is studied using the Max-Planck-Institut 19-level atmospheric general circulation model, ECHAM3, at T42 resolution. Five separate 14-year integrations are performed and results are presented for each individual realization and for the ensemble-averaged response. The results are compared to a 30-year control integration using a climatological monthly mean state of the sea surface temperatures and to analysis data. It is found that the ECHAM3 model, by and large, does reproduce the observed response pattern to El Niño and La Niña. During El Niño events, the subtropical jet streams in both hemispheres are intensified and displaced equatorward, and there is a tendency towards weak upper easterlies over the equator. The Southern Oscillation is a very stable feature of the integrations and is accurately reproduced in all experiments. The inter-annual variability at middle and high latitudes, on the other hand, is strongly dominated by chaotic dynamics, and the tropical SST forcing only modulates the atmospheric circulation. The potential predictability of the model is investigated for six different regions. The signal-to-noise ratio is large in most parts of the tropical belt, of medium strength in the western hemisphere and generally small over the European area. The ENSO signal is most pronounced during boreal spring. A particularly strong extratropical signal in the precipitation field during spring can be found over the southern United States. Western Canada is normally warmer during the warm ENSO phase, while northern Europe is warmer than normal during the ENSO cold phase. The reason is advection of warm air due to a more intense Pacific low than normal during the warm ENSO phase and a more intense Icelandic low than normal during the cold ENSO phase, respectively.
Abstract:
A fingerprint method for detecting anthropogenic climate change is applied to new simulations with a coupled ocean-atmosphere general circulation model (CGCM) forced by increasing concentrations of greenhouse gases and aerosols covering the years 1880 to 2050. In addition to the anthropogenic climate change signal, the space-time structure of the natural climate variability for near-surface temperatures is estimated from instrumental data over the last 134 years and two 1000 year simulations with CGCMs. The estimates are compared with paleoclimate data over 570 years. The space-time information on both the signal and the noise is used to maximize the signal-to-noise ratio of a detection variable obtained by applying an optimal filter (fingerprint) to the observed data. The inclusion of aerosols slows the predicted future warming. The probability that the observed increase in near-surface temperatures in recent decades is of natural origin is estimated to be less than 5%. However, this number is dependent on the estimated natural variability level, which is still subject to some uncertainty.
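The optimal-fingerprint idea, maximizing the signal-to-noise ratio of a detection variable by filtering the observations with the inverse noise covariance, can be sketched in a toy setting; the signal pattern and noise statistics below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "climate" with 3 grid points.
s = np.array([1.0, 0.5, 0.2])                       # assumed signal pattern
noise = rng.normal(size=(5000, 3)) * np.array([1.0, 2.0, 0.5])
C = np.cov(noise, rowvar=False)                     # estimated noise covariance

# Optimal fingerprint f = C^{-1} s: maximizes the SNR of the
# detection variable d = f^T y.
fingerprint = np.linalg.solve(C, s)

def snr2(filt):
    """Squared signal-to-noise ratio of the detection variable filt^T y."""
    return (filt @ s) ** 2 / (filt @ C @ filt)

snr_naive = snr2(s)              # project straight onto the raw pattern
snr_opt = snr2(fingerprint)      # optimized fingerprint
```

Because the noise here has very unequal variances across grid points, the optimized fingerprint down-weights the noisiest component and achieves a strictly higher detection SNR than projecting onto the raw pattern.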
Abstract:
In this paper, dual-hop amplify-and-forward (AF) cooperative systems in the presence of high-power amplifier (HPA) nonlinearity at semi-blind relays are investigated. Based on a modified AF cooperative system model that takes the HPA nonlinearity into account, the expression for the output signal-to-noise ratio (SNR) at the destination node is derived, where the interference due to both the AF relaying mechanism and the HPA nonlinearity is characterized. The performance of the AF cooperative system under study is evaluated in terms of the average symbol error probability (SEP), which is derived using the moment-generating function (MGF) approach, considering transmissions over Nakagami-m fading channels. Numerical results are provided and show the effects of several system parameters, such as the HPA parameters, the number of relays, the quadrature amplitude modulation (QAM) order and the Nakagami fading parameters, on the system performance.
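For reference, the classical end-to-end SNR of an ideal (nonlinearity-free) dual-hop AF link with per-hop SNRs γ₁ and γ₂ is γ₁γ₂/(γ₁ + γ₂ + 1); the paper's derivation adds HPA-induced interference terms on top of this baseline:

```python
def af_end_to_end_snr(gamma1, gamma2):
    """End-to-end SNR of an ideal dual-hop amplify-and-forward link
    with per-hop SNRs gamma1 and gamma2 (linear scale). The HPA
    nonlinearity studied in the paper degrades this further."""
    return gamma1 * gamma2 / (gamma1 + gamma2 + 1.0)
```

Note that the end-to-end SNR is always below the weaker of the two hops, which is why relay placement and power allocation matter in AF systems.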
Abstract:
An analysis method for diffusion tensor (DT) magnetic resonance imaging data is described, which, contrary to the standard method (multivariate fitting), does not require a specific functional model for diffusion-weighted (DW) signals. The method uses principal component analysis (PCA) under the assumption of a single fibre per pixel. PCA and the standard method were compared using simulations and human brain data. The two methods were equivalent in determining fibre orientation. PCA-derived fractional anisotropy and DT relative anisotropy had similar signal-to-noise ratio (SNR) and dependence on fibre shape. PCA-derived mean diffusivity had similar SNR to the respective DT scalar, and it depended on fibre anisotropy. Appropriate scaling of the PCA measures resulted in very good agreement between PCA and DT maps. In conclusion, the assumption of a specific functional model for DW signals is not necessary for characterization of anisotropic diffusion in a single fibre.
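For context, the standard DT scalar maps against which the PCA measures are compared (mean diffusivity and fractional anisotropy) follow directly from the tensor eigenvalues; this sketch uses the standard definitions, not the paper's PCA-derived quantities:

```python
import numpy as np

def dt_scalars(eigvals):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from the
    three eigenvalues of a diffusion tensor (standard definitions)."""
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean()
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa
```

FA ranges from 0 for isotropic diffusion (all eigenvalues equal) to 1 for purely one-dimensional, stick-like diffusion.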
Abstract:
In this paper we present a compliant neural interface designed to record bladder afferent activity. We developed the implant's microfabrication process using multiple layers of silicone rubber and thin metal so that a gold microelectrode array is embedded within four parallel polydimethylsiloxane (PDMS) microchannels (5 mm long, 100 μm wide, 100 μm deep). Electrode impedance at 1 kHz was optimized using a reactive ion etching (RIE) step, which increased the porosity of the electrode surface. The electrodes did not deteriorate after a 3 month immersion in phosphate buffered saline (PBS) at 37 °C. Due to the unique microscopic topography of the metal film on PDMS, the electrodes are extremely compliant and can withstand handling during implantation (twisting and bending) without electrical failure. The device was implanted acutely in anaesthetized rats, and strands of the dorsal branch of roots L6 and S1 were surgically teased and inserted into three microchannels under saline immersion to allow simultaneous in vivo recordings in an acute setting. We utilized a tripole electrode configuration to keep the background noise low and improve the signal-to-noise ratio. The device could distinguish two types of afferent nerve activity related to increasing bladder filling and contraction. To our knowledge, this is the first report of multichannel recordings of bladder afferent activity.
Abstract:
Neuroprostheses interfaced with transected peripheral nerves offer technological routes to controlling robotic limbs as well as conveying sensory feedback to patients suffering from traumatic neural injuries or degenerative diseases. To maximize the wealth of data obtained in recordings, interfacing devices are required to have intrafascicular resolution and to provide high signal-to-noise ratio (SNR) recordings. In this paper, we focus on a possible building block of a three-dimensional regenerative implant: a polydimethylsiloxane (PDMS) microchannel electrode capable of highly sensitive recordings in vivo. The PDMS 'micro-cuff' consists of a 3.5 mm long (100 µm × 70 µm cross section) microfluidic channel equipped with five evaporated Ti/Au/Ti electrodes of sub-100 nm thickness. Individual electrodes have an average impedance of 640 ± 30 kΩ with a phase angle of −58 ± 1 degrees at 1 kHz and survive demanding mechanical handling such as twisting and bending. In proof-of-principle acute implantation experiments in rats, surgically teased afferent nerve strands from the L5 dorsal root were threaded through the microchannel. Tactile stimulation of the skin was reliably monitored with the three inner electrodes in the device, which simultaneously recorded signal amplitudes of up to 50 µV under saline immersion. The overall SNR was approximately 4. A small but consistent time lag between the signals arriving at the three electrodes was observed and yields a fibre conduction velocity of 30 m s⁻¹. The fidelity of the recordings was verified by placing the same nerve strand in oil and recording activity with hook electrodes. Our results show that PDMS microchannel electrodes open a promising technological path to 3D regenerative interfaces.
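Two of the simple quantities reported above, a peak-to-RMS SNR and the conduction velocity from the inter-electrode time lag, can be sketched directly; the electrode spacing used in the example is hypothetical, not taken from the paper:

```python
import numpy as np

def snr_peak_to_rms(recording, baseline):
    """Crude SNR estimate: peak signal amplitude divided by the RMS
    of a quiet baseline segment."""
    return np.max(np.abs(recording)) / np.sqrt(np.mean(baseline ** 2))

def conduction_velocity(spacing_m, time_lag_s):
    """Fibre conduction velocity from the inter-electrode spacing and
    the time lag between the same spike at successive electrodes."""
    return spacing_m / time_lag_s
```

For instance, a hypothetical 1.2 mm spacing and a 40 µs lag would give the 30 m/s velocity quoted in the abstract.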
Abstract:
Increasing concentrations of greenhouse gases in the atmosphere are expected to modify the global water cycle with significant consequences for terrestrial hydrology. We assess the impact of climate change on hydrological droughts in a multimodel experiment including seven global impact models (GIMs) driven by bias-corrected climate from five global climate models under four representative concentration pathways (RCPs). Drought severity is defined as the fraction of land under drought conditions. Results show a likely increase in the global severity of hydrological drought at the end of the 21st century, with systematically greater increases for RCPs describing stronger radiative forcings. Under RCP8.5, droughts exceeding 40% of analyzed land area are projected by nearly half of the simulations. This increase in drought severity has a strong signal-to-noise ratio at the global scale, and Southern Europe, the Middle East, the Southeast United States, Chile, and South West Australia are identified as possible hotspots for future water security issues. The uncertainty due to GIMs is greater than that from global climate models, particularly if including a GIM that accounts for the dynamic response of plants to CO2 and climate, as this model simulates little or no increase in drought frequency. Our study demonstrates that different representations of terrestrial water-cycle processes in GIMs are responsible for a much larger uncertainty in the response of hydrological drought to climate change than previously thought. When assessing the impact of climate change on hydrology, it is therefore critical to consider a diverse range of GIMs to better capture the uncertainty.