948 results for Frequency Modulated Signals, Parameter Estimation, Signal-to-Noise-Ratio, Simulations
Abstract:
We perform optimisation of bi-directionally pumped dispersion compensating Raman amplifier modules. Optimal forward and backward pump powers for basic configurations using different commercially available fibers are presented for both single- and multi-channel systems. Optical signal-to-noise ratio improvement of up to 8 dB is achieved as a result of optimisation.
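To illustrate the optimisation workflow described above, here is a minimal sketch; the `osnr_model` objective is a hypothetical placeholder, since the real objective would come from solving the Raman amplifier propagation equations for the chosen fiber.

```python
# Minimal sketch of bi-directional pump-power optimisation. The OSNR model
# below is a toy placeholder, NOT a physical Raman amplifier model.
import numpy as np
from scipy.optimize import minimize

def osnr_model(pumps):
    """Hypothetical stand-in for a Raman amplifier simulation.

    pumps = [forward_pump_W, backward_pump_W]; returns OSNR in dB.
    """
    pf, pb = pumps
    gain = 10.0 * np.log10(1.0 + 8.0 * pf + 12.0 * pb)  # toy gain term
    ase = 2.0 * pf + 0.5 * pb                           # toy noise penalty
    return gain - ase

res = minimize(
    lambda p: -osnr_model(p),          # maximise OSNR
    x0=[0.2, 0.2],                     # initial forward/backward powers (W)
    bounds=[(0.0, 1.0), (0.0, 1.0)],   # per-pump power limits
    method="L-BFGS-B",
)
print(f"optimal pumps (W): {res.x}, OSNR: {osnr_model(res.x):.2f} dB")
```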
Abstract:
The relatively high phase noise of coherent optical systems poses unique challenges for forward error correction (FEC). In this letter, we propose a novel semianalytical method for selecting combinations of interleaver lengths and binary Bose-Chaudhuri-Hocquenghem (BCH) codes that meet a target post-FEC bit error rate (BER). Our method requires only short pre-FEC simulations, based on which we design interleavers and codes analytically. It is applicable to pre-FEC BER ∼10⁻³ and any post-FEC BER. In addition, we show that there is a tradeoff between code overhead and interleaver delay. Finally, for a target of 10⁻⁵, numerical simulations show that interleaver-code combinations selected using our method have post-FEC BER around 2× target. The target BER is achieved with 0.1 dB extra signal-to-noise ratio.
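As an illustration of the analytic selection step, the sketch below bounds post-FEC BER for a t-error-correcting BCH code under the idealized assumption that the interleaver renders bit errors i.i.d. at the measured pre-FEC BER; the letter's actual semianalytical model may account for residual correlation that this simplification ignores.

```python
# Sketch of analytic BCH code selection, assuming an interleaver deep enough
# that post-interleaver bit errors are approximately i.i.d. with the measured
# pre-FEC BER p. This is a simplification of the letter's method.
from scipy.stats import binom

def post_fec_ber(n, t, p):
    """Post-FEC BER for a t-error-correcting length-n code.

    Assumes bounded-distance decoding: blocks with more than t errors fail
    and are counted with their actual number of bit errors.
    """
    return sum(k * binom.pmf(k, n, p) for k in range(t + 1, n + 1)) / n

def smallest_t(n, p, target):
    """Smallest correction capability t meeting the post-FEC BER target."""
    for t in range(1, n // 2):
        if post_fec_ber(n, t, p) <= target:
            return t
    return None

# Example: BCH codeword length 1023, pre-FEC BER 1e-3, target 1e-5.
print(smallest_t(1023, 1e-3, 1e-5))
```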
Abstract:
Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems have significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semianalytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors. In this way, we relate pre-FEC statistics to post-FEC BER and BCH codes. Our method is applicable to pre-FEC BER around 10⁻³ and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10⁻⁵. Codes dimensioned with our bivariate binomial model meet the target within a 0.2-dB signal-to-noise ratio penalty.
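The statistics-extraction step might look like the sketch below, which counts errors per codeword and measures adjacent-bit error correlation from a simulated pre-FEC error stream; the paper's exact sufficient statistics for its bivariate binomial parameterization may differ.

```python
# Sketch of extracting per-codeword error statistics from a simulated
# pre-FEC error sequence (1 = bit error). Summarized here only as
# (mean errors per codeword, adjacent-error correlation ratio); the
# paper's actual bivariate binomial statistics may differ.
import numpy as np

def codeword_stats(errors, n):
    """Mean error count per n-bit codeword and adjacent-error excess."""
    errors = errors[: len(errors) // n * n]
    counts = errors.reshape(-1, n).sum(axis=1)     # errors per codeword
    pair_rate = np.mean(errors[:-1] & errors[1:])  # adjacent double errors
    p = errors.mean()
    # Under independence pair_rate ~ p**2; excess indicates PN error bursts.
    return counts.mean(), pair_rate / p**2

rng = np.random.default_rng(0)
errs = (rng.random(1_000_000) < 1e-2).astype(int)  # toy i.i.d. error stream
print(codeword_stats(errs, n=1023))                # ~ (10.2, ~1.0)
```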
Abstract:
Advanced signal processing, such as multi-channel digital back propagation and mid-span optical phase conjugation, can compensate for inter-channel nonlinear effects in point-to-point links. However, once such effects are compensated, the interaction between the signal and noise fields becomes dominant. We will show that this interaction has a direct impact on the signal-to-noise ratio improvement, observing that ideal optical phase conjugation offers 1.5 dB more performance benefit than DSP-based compensation.
Abstract:
Video streaming via Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment in both objective and subjective terms has not been properly addressed. In this paper, based on statistical analysis, a full analytic model of a no-reference objective metric, namely pause intensity (PI), for video quality assessment is presented. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate. This allows for instant quality measurement and control without requiring a reference video. PI specifically addresses the need for assessing the quality issue in terms of the continuity in the playout of TCP streaming videos, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different composition of individual elements, such as pause duration and pause frequency, which jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent.
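A plausible formalization of the playout-buffer model is sketched below, taking pause intensity as total paused time over total session time; the paper's analytic definition of PI may differ in detail.

```python
# Sketch of a TCP video playout-buffer simulation. Pause intensity is taken
# here as (total paused time) / (total session time); this is one plausible
# formalization, not necessarily the paper's exact analytic definition.
def pause_intensity(throughput, playout_rate, startup_bits, dt=1.0):
    """throughput: per-interval arrival rates (bits/s); playout_rate in bits/s."""
    buffer_bits, playing, paused_time, total_time = 0.0, False, 0.0, 0.0
    for r in throughput:
        buffer_bits += r * dt
        if not playing and buffer_bits >= startup_bits:
            playing = True                          # (re)start after rebuffering
        if playing:
            buffer_bits -= playout_rate * dt
            if buffer_bits < 0.0:
                buffer_bits, playing = 0.0, False   # buffer underrun: pause
        else:
            paused_time += dt
        total_time += dt
    return paused_time / total_time

# Example: throughput fluctuating around the playout rate causes pauses.
tput = [8e5 if i % 10 < 7 else 2e5 for i in range(600)]
print(pause_intensity(tput, playout_rate=7e5, startup_bits=2e6))
```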
Abstract:
Concurrent coding is an encoding scheme with 'holographic' type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, but does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order −18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the process of decoding and can determine the computational load that would be expected, as well as describing the critical levels of noise and missing data at which false messages begin to be generated.
Abstract:
Advanced engineering tools are in great demand in biology, biochemistry, and medicine, yet many of the existing instruments are expensive and require special facilities. With the advent of nanotechnology in the past decade, new approaches to developing devices and tools have been generated by academia and industry. One such technology, NMR spectroscopy, has been used by biochemists for more than two decades to study the molecular structure of chemical compounds. However, NMR spectrometers are very expensive and require special laboratory rooms for their proper operation. High magnetic fields with strengths on the order of several tesla make these instruments unaffordable to most research groups. This doctoral research proposes a new technology to develop NMR spectrometers that can operate at field strengths of less than 0.5 tesla using an inexpensive permanent magnet and spin-dependent nanoscale magnetic devices. This portable NMR system is intended to analyze samples as small as a few nanoliters. The main problem to resolve when downscaling the variables is to obtain an NMR signal with a high signal-to-noise ratio (SNR). A special tunneling magneto-resistive (TMR) sensor design was developed to achieve this goal. The minimum specifications for each component of the proposed NMR system were established, and a complete NMR system was designed based on these minimum requirements. The goal was always to find cost-effective, realistic components. The novel design of the NMR system uses technologies such as direct digital synthesis (DDS), digital signal processing (DSP), and a special backpropagation neural network that finds the best match of the NMR spectrum. The system was designed, calculated, and simulated with excellent results. In addition, a general method to design TMR sensors was developed; the technique was automated, and a computer program was written to help the designer perform this task interactively.
Abstract:
A report from the National Institutes of Health defines a disease biomarker as a “characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention.” Early diagnosis is a crucial factor for incurable diseases such as cancer and Alzheimer’s disease (AD). During the last decade, researchers have discovered that biochemical changes caused by a disease can be detected considerably earlier than physical manifestations/symptoms. In this dissertation, electrochemical detection was utilized as the detection strategy, as it offers high sensitivity/specificity, ease of operation, and capability of miniaturization and multiplexed detection. Electrochemical detection of biological analytes is an established field that has matured at a rapid pace during the last 50 years and adapted itself to advances in micro/nanofabrication procedures. Carbon fiber microelectrodes were utilized as the platform sensor due to their high signal-to-noise ratio, ease and low cost of fabrication, biocompatibility, and active carbon surface, which allows conjugation with biorecognition moieties. This dissertation specifically focuses on the detection of three extensively validated biomarkers for cancer and AD. Firstly, vascular endothelial growth factor (VEGF), a cancer biomarker, was detected using a one-step, reagentless immunosensing strategy. The immunosensing strategy allowed a rapid and sensitive means of VEGF detection, with a detection limit of about 38 pg/mL and a linear dynamic range of 0–100 pg/mL. Direct detection of the AD-related biomarker amyloid beta (Aβ) was achieved by exploiting its inherent electroactivity. The quantification of the ratio of Aβ1-40/42 (or Aβ ratio) has been established as a reliable test to diagnose AD through human clinical trials. Triple-barrel carbon fiber microelectrodes were used to simultaneously detect Aβ1-40 and Aβ1-42 in cerebrospinal fluid from rats within detection ranges of 100 nM to 1.2 μM and 400 nM to 1 μM, respectively. In addition, the release of the DNA damage/repair biomarker 8-hydroxydeoxyguanosine (8-OHdG) under the influence of reactive oxidative stress from a single lung endothelial cell was monitored using an activated carbon fiber microelectrode. The sensor was used to test the influence of nicotine, which is one of the most biologically active chemicals present in cigarette smoke and smokeless tobacco.
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around 10⁻³. We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of 10⁻⁵, codes selected using our method result in BERs around 3× target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
Abstract:
This work looks at the effect on mid-gap interface state defect density estimates for In0.53Ga0.47As semiconductor capacitors when different AC voltage amplitudes are selected for a fixed voltage bias step size (100 mV) during room temperature only electrical characterization. Results are presented for Au/Ni/Al2O3/In0.53Ga0.47As/InP metal–oxide–semiconductor capacitors with (1) n-type and p-type semiconductors, (2) different Al2O3 thicknesses, (3) different In0.53Ga0.47As surface passivation concentrations of ammonium sulphide, and (4) different transfer times to the atomic layer deposition chamber after passivation treatment on the semiconductor surface, thereby demonstrating a cross-section of device characteristics. The authors set out to determine the importance of the AC voltage amplitude selection for the interface state defect density extractions and whether this selection has a combined effect with the oxide capacitance. These capacitors are prototypical of the type of gate oxide material stacks that could form equivalent metal–oxide–semiconductor field-effect transistors beyond the 32 nm technology node. The authors do not attempt to achieve the best scaled equivalent oxide thickness in this work, as our focus is on accurately extracting device properties that will allow the investigation and reduction of interface state defect densities at the high-k/III–V semiconductor interface. The operating voltage for future devices will be reduced, potentially leading to an associated reduction in the AC voltage amplitude, which will force a decrease in the signal-to-noise ratio of electrical responses and could therefore result in less accurate impedance measurements. A concern thus arises regarding the accuracy of the electrical property extractions using such impedance measurements for future devices, particularly in relation to the mid-gap interface state defect density estimated from the conductance method and from the combined high–low frequency capacitance–voltage method. The authors apply a fixed voltage step of 100 mV for all voltage sweep measurements at each AC frequency. Each of these measurements is repeated 15 times for equidistant AC voltage amplitudes between 10 mV and 150 mV. This provides the desired AC voltage amplitude-to-step-size ratios from 1:10 to 3:2. Our results indicate that, although the selection of the oxide capacitance is important both to the success and accuracy of the extraction method, the mid-gap interface state defect density extractions are not overly sensitive to the AC voltage amplitude employed regardless of what oxide capacitance is used in the extractions, particularly in the range from 50% below the voltage sweep step size to 50% above it. Therefore, the use of larger AC voltage amplitudes in this range to achieve a better signal-to-noise ratio during impedance measurements for future low operating voltage devices will not distort the extracted interface state defect density.
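For reference, the two extraction methods named above have standard textbook forms (per unit area, with q the elementary charge); these are the general expressions, not the authors' specific implementation:

```latex
% Combined high--low frequency capacitance--voltage method:
\[
  D_{it} \;=\; \frac{1}{q}\left[
    \left(\frac{1}{C_{LF}} - \frac{1}{C_{ox}}\right)^{-1}
    - \left(\frac{1}{C_{HF}} - \frac{1}{C_{ox}}\right)^{-1}
  \right]
\]
% Conductance method, from the peak of the normalized parallel conductance:
\[
  D_{it} \;\approx\; \frac{2.5}{q}\,
  \left(\frac{G_p}{\omega}\right)_{\max}
\]
```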
Abstract:
We propose cyclic prefix single carrier full-duplex transmission in amplify-and-forward cooperative spectrum sharing networks to achieve multipath diversity and full-duplex spectral efficiency. Integrating full-duplex transmission into cooperative spectrum sharing systems results in two intrinsic problems: 1) residual loop interference occurs between the transmit and receive antennas at the secondary relays and 2) the primary users simultaneously suffer interference from the secondary source (SS) and the secondary relays (SRs). Thus, examining the effects of residual loop interference under a peak interference power constraint at the primary users and maximum transmit power constraints at the SS and the SRs is a particularly challenging problem in frequency selective fading channels. To do so, we derive and quantitatively compare the lower bounds on the outage probability and the corresponding asymptotic outage probability for max–min relay selection, partial relay selection, and maximum interference relay selection policies in frequency selective fading channels. To facilitate comparison, we provide the corresponding analysis for half-duplex transmission. Our results show two complementary regions, termed the signal-to-noise ratio (SNR) dominant region and the residual loop interference dominant region: multipath diversity and spatial diversity are achievable only in the SNR dominant region, whereas the diversity gain collapses to zero in the residual loop interference dominant region.
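The qualitative two-region behavior can be reproduced with a Monte Carlo sketch under strong simplifications (flat Rayleigh fading, no primary-user interference constraint); the paper's analysis is for frequency-selective channels and is analytic, so the sketch below is only illustrative.

```python
# Monte Carlo sketch of max-min relay selection with residual loop
# interference (LI), under simplifying flat Rayleigh fading; the paper
# treats frequency-selective channels and enforces the primary-user peak
# interference constraint, both omitted here.
import numpy as np

def outage_prob(snr_db, li_db, n_relays, gamma_th, trials=200_000, seed=1):
    rng = np.random.default_rng(seed)
    snr, li = 10 ** (snr_db / 10), 10 ** (li_db / 10)
    g1 = rng.exponential(1.0, (trials, n_relays))  # source -> relay gains
    g2 = rng.exponential(1.0, (trials, n_relays))  # relay -> destination gains
    hop1 = snr * g1 / (1.0 + li)                   # first hop, LI-degraded
    hop2 = snr * g2
    e2e = np.minimum(hop1, hop2)                   # per-relay end-to-end SNR
    best = e2e.max(axis=1)                         # max-min relay selection
    return np.mean(best < gamma_th)

# As LI grows, outage saturates: the LI-dominant region with zero diversity gain.
for li_db in (-np.inf, 0.0, 10.0):
    print(li_db, outage_prob(snr_db=15.0, li_db=li_db, n_relays=3, gamma_th=4.0))
```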
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation to recover independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not unmix all sources correctly. This conclusion is based on a study of the mutual information. Nevertheless, some sources may be well separated, mainly if the number of sources is large and the signal-to-noise ratio (SNR) is high.
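The core observation can be reproduced with a toy experiment: generate sum-to-one abundances, mix them linearly, and check how well FastICA recovers them. The generative model below is deliberately simplified relative to the paper's.

```python
# Sketch of the paper's core point: sum-to-one abundance fractions are
# mutually dependent, so ICA cannot fully unmix them. Toy linear model
# x = M a + noise, with Dirichlet abundances (each pixel sums to 1).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pix, n_src, n_bands = 5000, 3, 20

abund = rng.dirichlet(np.ones(n_src), size=n_pix)   # rows sum to 1
M = rng.random((n_bands, n_src))                    # toy endmember signatures
x = abund @ M.T + 0.01 * rng.standard_normal((n_pix, n_bands))

ica = FastICA(n_components=n_src, random_state=0)
est = ica.fit_transform(x)                          # estimated "sources"

# Correlate each estimated source with each true abundance; perfect
# unmixing would give a permutation-like matrix with entries near +-1.
corr = np.corrcoef(est.T, abund.T)[:n_src, n_src:]
print(np.round(corr, 2))
```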
Abstract:
This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12 mm² of silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
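For context, SNDR and SFDR figures like those quoted are conventionally computed from a coherently sampled single-tone test; a generic sketch (not the authors' specific test setup) follows.

```python
# Generic sketch of how single-tone SNDR/SFDR are computed from an ADC
# output record. Assumes coherent sampling (integer number of cycles),
# so no window is needed; this is not the authors' specific test bench.
import numpy as np

def sndr_sfdr(samples):
    spec = np.abs(np.fft.rfft(samples)) ** 2
    spec[0] = 0.0                                   # drop DC
    sig = int(np.argmax(spec))
    p_signal = spec[sig]
    spec[sig] = 0.0                                 # remainder = noise + distortion
    sndr = 10 * np.log10(p_signal / spec.sum())
    sfdr = 10 * np.log10(p_signal / spec.max())     # vs. the worst spur
    return sndr, sfdr

# Example: ideal 12-bit quantization of a full-scale tone, 4096 points.
n, cycles = 4096, 67                                # coprime -> coherent sampling
x = np.sin(2 * np.pi * cycles * np.arange(n) / n)
xq = np.round(x * 2047) / 2047                      # ideal 12-bit quantizer
print(sndr_sfdr(xq))                                # SNDR near 6.02*12 + 1.76 dB
```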
Abstract:
The major drawback of Ka band, the operating frequency of the AltiKa altimeter on board SARAL, is its sensitivity to atmospheric liquid water. Even light rain or heavy clouds can strongly attenuate and distort the signal, leading to erroneous geophysical parameter estimates, so good detection of the samples affected by atmospheric liquid water is crucial. As AltiKa operates at a single frequency, a new technique based on the detection, by a Matching Pursuit algorithm, of short-scale variations of the slope of the echo waveform plateau was developed and implemented prelaunch in the ground segment. As the parameterization of the detection algorithm was defined using Jason-1 data, the parameters were re-estimated during the cal-val phase, during which the algorithm was also updated. The measured sensor signal-to-noise ratio is significantly better than planned, and the data loss due to attenuation by rain is significantly smaller than expected (<0.1%). For cycles 2 to 9, the flag detects about 9% of 1-Hz data: 5.5% as rainy and 3.5% as backscatter bloom (or sigma0 bloom). The results of the flagging process are compared to independent rain data from microwave radiometers to evaluate its performance in terms of detection and false alarms.
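As an illustration of the detection principle, below is a minimal generic matching pursuit over a toy dictionary of localized atoms; the operational AltiKa dictionary and thresholds are not given in the abstract, so everything here is illustrative.

```python
# Minimal generic matching pursuit, illustrating how localized (short-scale)
# features are picked out with a small dictionary of atoms. The operational
# AltiKa dictionary and decision thresholds are not public in this abstract.
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=5):
    """dictionary: (n_atoms, n_samples), rows unit-norm. Returns coefficients."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(len(dictionary))
    for _ in range(n_iter):
        proj = dictionary @ residual          # correlation with each atom
        k = int(np.argmax(np.abs(proj)))      # best-matching atom
        coeffs[k] += proj[k]
        residual -= proj[k] * dictionary[k]   # subtract its contribution
    return coeffs, residual

# Toy dictionary: unit impulses at every sample (detects localized spikes).
n = 64
D = np.eye(n)
sig = np.zeros(n); sig[20] = 3.0; sig[41] = -2.0
c, r = matching_pursuit(sig, D, n_iter=2)
print(np.nonzero(c)[0])                       # -> [20 41]
```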
Abstract:
Several studies have reported changes in spontaneous brain rhythms that could be used as clinical biomarkers or in the evaluation of neuropsychological and drug treatments in longitudinal studies using magnetoencephalography (MEG). There is an increasing need to use these measures in early diagnosis and in tracking pathology progression; however, there is a lack of studies addressing how reliable they are. Here, we provide the first test-retest reliability estimate of MEG power in resting state at sensor and source space. In this study, we recorded 3 sessions of resting-state MEG activity from 24 healthy subjects with an interval of a week between each session. Power values were estimated at sensor and source space with beamforming for the classical frequency bands: delta (2–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), low beta (13–20 Hz), high beta (20–30 Hz), and gamma (30–45 Hz). Then, test-retest reliability was evaluated using the intraclass correlation coefficient (ICC). We also evaluated the relation between source power and within-subject variability. In general, the ICC of theta, alpha, and low beta power was fairly high (ICC > 0.6), while that of delta and gamma power was lower. In source space, fronto-posterior alpha, frontal beta, and medial temporal theta showed the most reliable profiles. Signal-to-noise ratio could be partially responsible for reliability, as low signal intensity resulted in high within-subject variability, but the inherent nature of some brain rhythms in resting state might also be driving these reliability patterns. In conclusion, our results describe the reliability of MEG power estimates in each frequency band, which could be considered in disease characterization or clinical trials.
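The two computational steps, band power from a Welch PSD and test-retest ICC from a two-way ANOVA decomposition, might look like the sketch below; Shrout and Fleiss ICC(2,1) is assumed here, though the paper may use a different ICC variant.

```python
# Sketch of the two computational steps: relative band power from a Welch
# PSD, and test-retest ICC(2,1) (Shrout & Fleiss) from a two-way ANOVA
# decomposition; the paper's exact ICC variant is an assumption here.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=2 * int(fs))
    band = (f >= lo) & (f < hi)
    return trapezoid(pxx[band], f[band])

def icc_2_1(y):
    """y: (n_subjects, k_sessions) matrix of band-power values."""
    n, k = y.shape
    grand = y.mean()
    msr = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions
    sse = ((y - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
bp = band_power(rng.standard_normal(3000), fs=300.0, lo=8.0, hi=13.0)  # alpha
# Toy ICC check: 24 subjects x 3 sessions with a stable subject effect.
y = rng.normal(0, 1, (24, 1)) + rng.normal(0, 0.5, (24, 3))
print(bp, icc_2_1(y))   # high ICC: subject effect dominates session noise
```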