946 results for TO-NOISE RATIO
Abstract:
The relatively high phase noise of coherent optical systems poses unique challenges for forward error correction (FEC). In this letter, we propose a novel semianalytical method for selecting combinations of interleaver lengths and binary Bose-Chaudhuri-Hocquenghem (BCH) codes that meet a target post-FEC bit error rate (BER). Our method requires only short pre-FEC simulations, based on which we design interleavers and codes analytically. It is applicable to pre-FEC BERs of around 10⁻³ and any post-FEC BER. In addition, we show that there is a tradeoff between code overhead and interleaver delay. Finally, for a target post-FEC BER of 10⁻⁵, numerical simulations show that interleaver-code combinations selected using our method have post-FEC BERs around 2× the target. The target BER is achieved with 0.1 dB of extra signal-to-noise ratio.
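As a rough illustration of how a row-column block interleaver breaks up error bursts before BCH decoding (the dimensions below are hypothetical, not the ones selected by the method), a minimal sketch in Python:

import numpy as np

def block_interleave(bits, rows, cols):
    """Write row-by-row into a rows x cols array, read out column-by-column."""
    return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

def block_deinterleave(bits, rows, cols):
    """Inverse: write column-by-column, read out row-by-row."""
    return np.asarray(bits).reshape(cols, rows).T.reshape(-1)

# Hypothetical dimensions: 8 codewords (rows) of 16 coded bits (cols) each.
rows, cols = 8, 16
original_index = np.arange(rows * cols)            # bit i of the coded stream
channel_stream = block_interleave(original_index, rows, cols)

# A burst of 8 consecutive channel errors (e.g. after a phase-slip event) ...
burst_positions = np.arange(40, 48)
hit_codewords = channel_stream[burst_positions] // cols
print(np.bincount(hit_codewords, minlength=rows))
# -> [1 1 1 1 1 1 1 1]: the burst is spread as one error per codeword,
#    which even a t = 1 error-correcting BCH code could handle.

# Sanity check: deinterleaving inverts interleaving.
assert np.array_equal(block_deinterleave(channel_stream, rows, cols), original_index)

A longer interleaver spreads longer bursts but adds delay, which is the overhead-versus-delay tradeoff the letter quantifies.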
Abstract:
Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems have significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semianalytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors. In this way, we relate pre-FEC statistics to post-FEC BER and BCH codes. Our method is applicable to pre-FEC BERs around 10⁻³ and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10⁻⁵. Codes dimensioned with our bivariate binomial model meet the target within 0.2 dB of signal-to-noise ratio.
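The bivariate model itself is not reproduced in the abstract. As a baseline for comparison, the sketch below evaluates the standard memoryless (univariate) binomial approximation of post-FEC BER for a t-error-correcting code; this is the kind of estimate a bivariate model refines to account for phase-noise-induced error bursts. The code length and correction capability are illustrative only.

from math import lgamma, log, exp

def log_binom_pmf(i, n, p):
    """Log of the binomial probability of exactly i errors in n bits at BER p."""
    return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
            + i * log(p) + (n - i) * log(1 - p))

def post_fec_ber_binomial(p, n, t):
    """Post-FEC BER of a t-error-correcting code of length n, assuming independent
    (memoryless) bit errors at pre-FEC BER p and that a failed decoding leaves
    the i channel errors uncorrected."""
    return sum(i * exp(log_binom_pmf(i, n, p)) for i in range(t + 1, n + 1)) / n

# Illustrative parameters: a length-1023 BCH code correcting t = 11 errors.
print(post_fec_ber_binomial(p=1e-3, n=1023, t=11))

Because phase noise clusters errors, the memoryless estimate is optimistic; capturing that clustering is precisely what motivates the bivariate parameterization described in the paper.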
Abstract:
Advanced signal processing, such as multi-channel digital back-propagation and mid-span optical phase conjugation, can compensate for inter-channel nonlinear effects in point-to-point links. However, once such effects are compensated, the interaction between the signal and noise fields becomes dominant. We will show that this interaction has a direct impact on the signal-to-noise ratio improvement, observing that ideal optical phase conjugation offers a 1.5 dB greater performance benefit than DSP-based compensation.
Abstract:
Concurrent coding is an encoding scheme with 'holographic'-type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct random errors and burst errors simultaneously, but does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the decoding process and can determine the expected computational load, as well as the critical levels of noise and missing data at which false messages begin to be generated.
Abstract:
In this Letter, we theoretically and numerically analyze the performance of coherent optical transmission systems that deploy inline or transceiver-based nonlinearity compensation techniques. For systems where signal-signal nonlinear interactions are fully compensated, we find that beyond the performance peak the signal-to-noise ratio degradation has a slope of 3 dB(SNR)/dB(power), suggesting a quartic rather than quadratic dependence on signal power. This is directly related to the fact that signals in a given span interact not only with linear amplified spontaneous emission noise, but also with the nonlinear four-wave mixing products generated by signal-noise interaction in previous, hitherto uncompensated spans. The performance of optical systems employing different nonlinearity compensation schemes was numerically simulated and compared against analytical predictions, showing good agreement within a 0.4 dB margin of error.
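A back-of-the-envelope numerical check of the quoted slopes, using arbitrary normalized coefficients rather than values from the Letter, might look like this: a conventional link accumulates nonlinear interference growing with the cube of launch power, while a link with signal-signal interactions compensated is left with residual noise growing with the fourth power.

import numpy as np

# Normalized illustration; coefficients are arbitrary, not taken from the paper.
P = np.logspace(-2, 2, 401)              # launch power (normalized units)
N_ase = 1.0                              # accumulated ASE noise power
snr_conventional = P / (N_ase + P**3)    # signal-signal NLI grows as P^3
snr_compensated  = P / (N_ase + P**4)    # residual signal-noise NLI grows as P^4

def slope_db_per_db(snr, p=P, pts_per_decade=100):
    """SNR-vs-power slope (dB/dB) over the last decade, well beyond the peak."""
    s_db, p_db = 10 * np.log10(snr), 10 * np.log10(p)
    return (s_db[-1] - s_db[-pts_per_decade - 1]) / (p_db[-1] - p_db[-pts_per_decade - 1])

print(round(slope_db_per_db(snr_conventional), 1))  # ~ -2.0 dB/dB (quadratic penalty)
print(round(slope_db_per_db(snr_compensated), 1))   # ~ -3.0 dB/dB (quartic noise growth)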
Abstract:
The need to incorporate advanced engineering tools into biology, biochemistry, and medicine is in great demand. Many of the existing instruments and tools are expensive and require special facilities. With the advent of nanotechnology in the past decade, new approaches to developing devices and tools have been generated by academia and industry. One such technology, NMR spectroscopy, has been used by biochemists for more than two decades to study the molecular structure of chemical compounds. However, NMR spectrometers are very expensive and require special laboratory rooms for their proper operation. High magnetic fields, with strengths on the order of several Tesla, make these instruments unaffordable to most research groups. This doctoral research proposes a new technology to develop NMR spectrometers that can operate at field strengths of less than 0.5 Tesla using an inexpensive permanent magnet and spin-dependent nanoscale magnetic devices. This portable NMR system is intended to analyze samples as small as a few nanoliters. The main problem to resolve when downscaling the variables is obtaining an NMR signal with a high signal-to-noise ratio (SNR). A special tunneling magneto-resistive (TMR) sensor design was developed to achieve this goal. The minimum specifications for each component of the proposed NMR system were established, and a complete NMR system was designed based on these minimum requirements. The goal was always to find cost-effective, realistic components. The novel design of the NMR system uses technologies such as direct digital synthesis (DDS), digital signal processing (DSP), and a special backpropagation neural network that finds the best match of the NMR spectrum. The system was designed, calculated, and simulated with excellent results. In addition, a general method to design TMR sensors was developed. The technique was automated, and a computer program was written to help the designer perform this task interactively.
Abstract:
A report from the National Institutes of Health defines a disease biomarker as a "characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention." Early diagnosis is a crucial factor for incurable diseases such as cancer and Alzheimer's disease (AD). During the last decade, researchers have discovered that biochemical changes caused by a disease can be detected considerably earlier than physical manifestations/symptoms. In this dissertation, electrochemical detection was utilized as the detection strategy, as it offers high sensitivity/specificity, ease of operation, and the capability of miniaturization and multiplexed detection. Electrochemical detection of biological analytes is an established field that has matured at a rapid pace during the last 50 years and adapted itself to advances in micro/nanofabrication procedures. Carbon fiber microelectrodes were utilized as the platform sensor due to their high signal-to-noise ratio, ease and low cost of fabrication, biocompatibility, and active carbon surface, which allows conjugation with biorecognition moieties. This dissertation specifically focuses on the detection of three extensively validated biomarkers for cancer and AD. First, vascular endothelial growth factor (VEGF), a cancer biomarker, was detected using a one-step, reagentless immunosensing strategy. The immunosensing strategy allowed a rapid and sensitive means of VEGF detection, with a detection limit of about 38 pg/mL and a linear dynamic range of 0–100 pg/mL. Direct detection of the AD-related biomarker amyloid beta (Aβ) was achieved by exploiting its inherent electroactivity. The quantification of the Aβ1-40/42 ratio (or Aβ ratio) has been established as a reliable test to diagnose AD through human clinical trials. Triple-barrel carbon fiber microelectrodes were used to simultaneously detect Aβ1-40 and Aβ1-42 in cerebrospinal fluid from rats within detection ranges of 100 nM to 1.2 μM and 400 nM to 1 μM, respectively. In addition, the release of the DNA damage/repair biomarker 8-hydroxydeoxyguanosine (8-OHdG) under the influence of reactive oxidative stress from a single lung endothelial cell was monitored using an activated carbon fiber microelectrode. The sensor was used to test the influence of nicotine, one of the most biologically active chemicals present in cigarette smoke and smokeless tobacco.
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around 10⁻³. We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of 10⁻⁵, codes selected using our method result in BERs around 3× the target and achieve the target with around 0.2 dB of extra signal-to-noise ratio.
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery, the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In the simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not correctly unmix all sources. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly if the number of sources is large and the signal-to-noise ratio (SNR) is high.
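A minimal synthetic experiment in the same spirit (random endmember signatures and Dirichlet abundances stand in for the paper's generative model) illustrates the underlying issue: the sum-to-one constraint makes the abundance sources correlated, so ICA's independence assumption does not hold and recovery is generally imperfect.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pixels, n_bands, n_sources = 5000, 50, 3

# Abundances sum to one in every pixel (Dirichlet), so they cannot be independent.
abundances = rng.dirichlet(np.ones(n_sources), size=n_pixels)    # (pixels, sources)
endmembers = rng.uniform(0.1, 1.0, size=(n_sources, n_bands))    # synthetic signatures
observations = abundances @ endmembers + 0.01 * rng.standard_normal((n_pixels, n_bands))

print(np.corrcoef(abundances.T))   # off-diagonal terms are clearly nonzero (dependence)

ica = FastICA(n_components=n_sources, random_state=0)
estimated = ica.fit_transform(observations)   # attempted recovery of abundance sources

# Correlation between each true abundance map and its best-matching ICA component;
# 1.0 would mean perfect recovery, which the source dependence typically prevents.
corr = np.abs(np.corrcoef(abundances.T, estimated.T)[:n_sources, n_sources:])
print(corr.max(axis=1))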
Abstract:
Background: Indices predictive of central obesity include waist circumference (WC) and waist-to-height ratio (WHtR). The aims of this study were 1) to establish smoothed centile charts and LMS tables for WC and WHtR in Colombian youth and 2) to evaluate the utility of these parameters as predictors of overweight and obesity. Method: A cross-sectional study whose sample comprised 7954 healthy Colombian schoolchildren [boys n=3460 and girls n=4494, mean (standard deviation) age 12.8 (2.3) years]. Weight, height, body mass index (BMI), WC, WHtR, and their percentiles were calculated. Appropriate cut-off points of WC and WHtR for overweight and obesity, as defined by the International Obesity Task Force (IOTF), were selected using receiver operating characteristic (ROC) analysis. The discriminating power of WC and WHtR was expressed as the area under the curve (AUC). Results: Reference values for WC and WHtR are presented. Mean WC increased and WHtR decreased with age for both genders. We found a moderate positive correlation between WC and BMI (r = 0.756, P < 0.01) and between WHtR and BMI (r = 0.604, P < 0.01). The ROC analysis showed high discriminating power in the identification of overweight and obesity for both measures in our sample. Overall, WHtR was a slightly better predictor of overweight/obesity (AUC 95% CI 0.868-0.916) than WC (AUC 95% CI 0.862-0.904). Conclusion: This paper presents the first sex- and age-specific WC and WHtR percentiles among Colombian children and adolescents aged 9–17.9 years. By providing LMS tables for Latin-American people based on Colombian reference data, we hope to provide quantitative tools for the study of obesity and its comorbidities.
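For readers unfamiliar with the cut-off selection step, the sketch below shows the generic ROC procedure on fabricated numbers (not the study's data): the candidate WHtR cut-off is the threshold maximizing Youden's J statistic, and the AUC summarizes discriminating power.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Made-up WHtR values for non-overweight (0) and IOTF-overweight/obese (1) children.
whtr = np.concatenate([rng.normal(0.44, 0.04, 900), rng.normal(0.52, 0.05, 100)])
overweight = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])

fpr, tpr, thresholds = roc_curve(overweight, whtr)
auc = roc_auc_score(overweight, whtr)

# Youden's J = sensitivity + specificity - 1; its maximum gives the optimal cut-off.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}, optimal WHtR cut-off = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")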
Abstract:
The validation of an analytical procedure must be certified through the determination of parameters known as figures of merit. For first-order data, accuracy, precision, robustness, and bias are determined similarly to univariate calibration methods. Linearity, sensitivity, signal-to-noise ratio, adjustment, selectivity, and confidence intervals need different approaches that are specific to multivariate data. Selectivity and signal-to-noise ratio are more critical, and they can only be estimated by means of the calculation of the net analyte signal. In second-order calibration, some different approaches are necessary due to the data structure.
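As an illustration of the net analyte signal idea (a simplified, hypothetical first-order example, not a prescription from any specific guideline): the analyte's net signal is the part of the measured spectrum orthogonal to the space spanned by the interferents' spectra, and selectivity and one form of signal-to-noise ratio follow from its norm.

import numpy as np

def net_analyte_signal(r_sample, S_interferents):
    """Project a measured spectrum onto the subspace orthogonal to the
    interferents' spectra (columns of S_interferents)."""
    S = np.atleast_2d(S_interferents)
    projector = np.eye(len(r_sample)) - S @ np.linalg.pinv(S)
    return projector @ r_sample

rng = np.random.default_rng(2)
channels = np.arange(200)

# Hypothetical pure-component spectra: one analyte plus two interferents (Gaussian bands).
analyte = np.exp(-0.5 * ((channels - 80) / 10) ** 2)
interferents = np.column_stack([
    np.exp(-0.5 * ((channels - 70) / 25) ** 2),
    np.exp(-0.5 * ((channels - 120) / 15) ** 2),
])
noise = 0.005 * rng.standard_normal(len(channels))          # simulated instrumental noise
sample = 0.3 * analyte + 0.7 * interferents[:, 0] + 0.4 * interferents[:, 1] + noise

nas = net_analyte_signal(sample, interferents)
selectivity = np.linalg.norm(nas) / np.linalg.norm(sample)  # fraction of useful signal
snr = np.linalg.norm(nas) / np.linalg.norm(noise)           # net signal vs. noise norm
print(f"selectivity = {selectivity:.2f}, signal-to-noise ratio = {snr:.1f}")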
Abstract:
Electrical sounding and electrical profiling surveys have been carried out worldwide to evaluate a wide variety of problems in mineral exploration and hydrogeology, as well as to characterize sites at risk of contamination. Despite the usefulness of geoelectrical surveys in hydrogeological studies and in the characterization of decommissioned waste disposal sites, a problem in the use of electrical soundings arises from the lack of space for data acquisition in these areas. In urban areas this is an important limitation because of existing buildings, which often prevent field surveys with arrays of adequate length. Although shallow investigations tend to be effective for geoenvironmental characterization, the estimation of key parameters may depend on investigating deeper portions. However, a slightly greater depth of investigation is obtained with the dipole-dipole array than with Schlumberger and Wenner arrays of the same length. This characteristic of the dipole-dipole array can be observed in several studies but, probably owing to tradition and to the signal-to-noise ratio, the vast majority of electrical soundings are carried out with the Schlumberger and Wenner arrays; the dipole-dipole array is used mostly in 2D surveys. As far as we know, this greater depth capability of the 1D dipole-dipole array has not been explored in geoenvironmental or hydrogeological studies in urban areas. This work presents field results that highlight the usefulness of dipole-dipole soundings in the investigation of such areas.
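The signal-to-noise argument can be made concrete through the arrays' geometric factors: for spreads of roughly comparable total length, the dipole-dipole array has a much larger geometric factor and therefore measures a much smaller voltage. The survey parameters below are hypothetical, not data from this work.

import numpy as np

def k_wenner(a):
    """Geometric factor of the Wenner array with electrode spacing a (m)."""
    return 2 * np.pi * a

def k_schlumberger(L, l):
    """Geometric factor of the Schlumberger array, L = AB/2 and l = MN/2 (m)."""
    return np.pi * (L**2 - l**2) / (2 * l)

def k_dipole_dipole(a, n):
    """Geometric factor of the dipole-dipole array, dipole length a (m), separation n*a."""
    return np.pi * n * (n + 1) * (n + 2) * a

# Hypothetical measurement over a 100 ohm.m homogeneous half-space with 100 mA of current.
rho, current = 100.0, 0.1
for name, k in [("Wenner, a = 30 m", k_wenner(30)),
                ("Schlumberger, AB/2 = 45 m, MN/2 = 5 m", k_schlumberger(45, 5)),
                ("dipole-dipole, a = 10 m, n = 6", k_dipole_dipole(10, 6))]:
    dV = rho * current / k    # measured voltage on the homogeneous half-space
    print(f"{name}: K = {k:8.1f} m, deltaV = {1000 * dV:.2f} mV")

The dipole-dipole reading is on the order of a millivolt, versus tens of millivolts for the other arrays, which is consistent with its reputation for a poorer signal-to-noise ratio despite its slightly greater depth of investigation.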
Abstract:
The steady-state free precession (SSFP) sequence has been widely used in low-field and low-resolution NMR imaging experiments to increase the signal-to-noise ratio (S/N) of the signals. Here, we analyzed the Scrambled Steady State (SSS) and Unscrambled Steady State (USS) sequences to suppress phase anomalies and sidebands of the ¹³C NMR spectrum acquired in the SSFP regime. The results showed that applying the USS sequence allowed a uniform distribution of the time interval between pulses (Tp) within the established time range, providing greater suppression of phase anomalies and sidebands compared with the SSS sequence.
Abstract:
In this paper, we discuss the use of photonic crystal fibers (PCFs) as discrete devices for simultaneous wideband dispersion compensation and Raman amplification. The performance of the PCFs in terms of gain, ripple, optical signal-to-noise ratio (OSNR), and the fiber length required for complete dispersion compensation is compared with that of conventional dispersion compensating fibers (DCFs). The main goal is to determine the minimum PCF loss beyond which its performance surpasses a state-of-the-art DCF and justifies practical use in telecommunication systems. (C) 2009 Optical Society of America
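The required length for complete dispersion compensation follows from cancelling the accumulated dispersion of the transmission span; the sketch below uses illustrative dispersion values, not those of the fibers studied in the paper.

# Hypothetical link; dispersion values are illustrative only.
D_smf, L_smf = 17.0, 80.0      # transmission fiber: ps/(nm km), km
D_dcf, D_pcf = -100.0, -500.0  # assumed dispersion of a typical DCF and a highly dispersive PCF

def compensating_length(D_tx, L_tx, D_comp):
    """Fiber length (km) that cancels the accumulated dispersion D_tx * L_tx."""
    return -D_tx * L_tx / D_comp

print(compensating_length(D_smf, L_smf, D_dcf))  # ~13.6 km of DCF
print(compensating_length(D_smf, L_smf, D_pcf))  # ~2.7 km of PCF

The shorter compensating length is one reason total PCF loss, rather than loss per kilometer alone, sets the break-even point against a state-of-the-art DCF.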
Abstract:
The Community Climate Model (CCM3) from the National Center for Atmospheric Research (NCAR) is used to investigate the effect of South Atlantic sea surface temperature (SST) anomalies on the interannual to decadal variability of South American precipitation. Two ensembles composed of multidecadal simulations forced with monthly SST data from the Hadley Centre for the period 1949 to 2001 are analysed. A statistical treatment based on the signal-to-noise ratio and Empirical Orthogonal Functions (EOF) is applied to the ensembles in order to reduce the internal variability among the integrations. The ensemble treatment shows a spatial and temporal dependence of reproducibility. A high degree of reproducibility is found in the tropics, while the extratropics are apparently less reproducible. Austral autumn (MAM) and spring (SON) precipitation appears to be more reproducible over the South America-South Atlantic region than the summer (DJF) and winter (JJA) rainfall. While the Inter-tropical Convergence Zone (ITCZ) region is dominated by external variance, the South Atlantic Convergence Zone (SACZ) over South America is predominantly determined by internal variance, which makes it a difficult phenomenon to predict. Alternatively, the SACZ over the western South Atlantic appears to be more sensitive to the subtropical SST anomalies than over the continent. An attempt is made to separate the atmospheric response forced by the South Atlantic SST anomalies from that associated with the El Niño-Southern Oscillation (ENSO). Results show that both the South Atlantic and Pacific SSTs modulate the intensity and position of the SACZ during DJF. In particular, the subtropical South Atlantic SSTs are more important than ENSO in determining the position of the SACZ over the southeast Brazilian coast during DJF. On the other hand, the ENSO signal seems to influence the intensity of the SACZ not only in DJF but especially its oceanic branch during MAM. Both local and remote influences, however, are confounded by the large internal variance in the region. During MAM and JJA, the South Atlantic SST anomalies affect the magnitude and the meridional displacement of the ITCZ. In JJA, ENSO has relatively little influence on the interannual variability of the simulated rainfall. During SON, however, ENSO seems to counteract the effect of the subtropical South Atlantic SST variations on convection over South America.
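One common way of expressing such an ensemble signal-to-noise ratio is the ratio of the externally forced variance (the interannual variance of the ensemble mean) to the internal variance (the spread of the members about that mean). The sketch below applies this generic definition to synthetic data; it is not the paper's exact statistical treatment, and the array sizes are illustrative.

import numpy as np

def signal_to_noise(precip):
    """precip: array (members, years, lat, lon) of simulated seasonal precipitation.
    External (SST-forced) variance is the interannual variance of the ensemble mean;
    internal variance is the spread of the members about that mean."""
    ensemble_mean = precip.mean(axis=0)                        # (years, lat, lon)
    var_external = ensemble_mean.var(axis=0)                   # forced, reproducible part
    var_internal = (precip - ensemble_mean).var(axis=(0, 1))   # member-to-member noise
    return var_external / var_internal                         # (lat, lon) map of SNR

# Synthetic check with a known forced signal (illustrative only).
rng = np.random.default_rng(3)
members, years, ny, nx = 9, 53, 10, 12
forced = rng.standard_normal((years, ny, nx))                  # identical in every member
noise = rng.standard_normal((members, years, ny, nx))          # differs between members
print(signal_to_noise(forced + 0.5 * noise).mean())            # > 1: highly reproducible
print(signal_to_noise(forced + 3.0 * noise).mean())            # < 1: internal variance dominates

Regions where this ratio is large (such as the tropical ITCZ in the study) are potentially predictable from SST forcing, whereas regions dominated by internal variance (such as the continental SACZ) are not.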