948 results for "Low signal-to-noise ratio regime"


Relevance:

100.00%

Publisher:

Abstract:

Long-haul high-speed optical transmission systems are significantly distorted by the interplay between electronic chromatic dispersion (CD) equalization and local oscillator (LO) laser phase noise, which leads to an effect called equalization enhanced phase noise (EEPN). EEPN degrades the performance of optical communication systems increasingly severely with fiber dispersion, LO laser linewidth, symbol rate, and modulation format order. In this paper, we present an analytical model for evaluating the bit-error-rate (BER) versus signal-to-noise ratio (SNR) performance of n-level phase shift keying (n-PSK) coherent transmission systems employing differential carrier phase estimation (CPE), where the influence of EEPN is considered. Theoretical results based on this model have been investigated for differential quadrature phase shift keying (DQPSK), differential 8-PSK (D8PSK), and differential 16-PSK (D16PSK) coherent transmission systems. The influence of EEPN on the BER performance in terms of fiber dispersion, LO phase noise, symbol rate, and modulation format is analyzed in detail. The BER behavior predicted by this analytical model agrees well with previously reported BER floors caused by EEPN. Further simulations have also been carried out for differential CPE considering EEPN. The results indicate that this analytical model gives an accurate prediction for the DQPSK system, and a leading-order approximation for the D8PSK and D16PSK systems.
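As a rough illustration of the differential detection described above, here is a minimal Monte Carlo sketch of DQPSK BER versus SNR over plain AWGN; the paper's EEPN, laser linewidth, and dispersion effects are deliberately omitted, and the natural (non-Gray) bit mapping is an assumption of this sketch:

```python
import cmath
import math
import random

def dqpsk_ber(snr_db, n_sym=100_000, seed=1):
    """Monte Carlo BER of differentially encoded/detected QPSK over AWGN."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)            # Es/N0, linear
    sigma = math.sqrt(1 / (2 * snr))     # noise std per I/Q component
    phase = 0.0                          # differentially encoded carrier phase
    prev_rx = 1 + 0j                     # first received symbol is the reference
    errors = bits = 0
    for _ in range(n_sym):
        sym = rng.randrange(4)                        # 2 data bits per symbol
        phase += sym * math.pi / 2                    # differential encoding
        tx = cmath.exp(1j * phase)
        rx = tx + complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
        diff = cmath.phase(rx * prev_rx.conjugate())  # differential detection
        det = round(diff / (math.pi / 2)) % 4
        errors += bin(sym ^ det).count("1")           # natural (non-Gray) map
        bits += 2
        prev_rx = rx
    return errors / bits
```

Because detection compares consecutive received symbols, phase noise between symbols (the mechanism EEPN amplifies) degrades this scheme directly, which is why the paper's model tracks it explicitly.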

Relevance:

100.00%

Publisher:

Abstract:

The relatively high phase noise of coherent optical systems poses unique challenges for forward error correction (FEC). In this letter, we propose a novel semianalytical method for selecting combinations of interleaver lengths and binary Bose-Chaudhuri-Hocquenghem (BCH) codes that meet a target post-FEC bit error rate (BER). Our method requires only short pre-FEC simulations, based on which we design interleavers and codes analytically. It is applicable to pre-FEC BER around 10⁻³, and any post-FEC BER. In addition, we show that there is a tradeoff between code overhead and interleaver delay. Finally, for a target post-FEC BER of 10⁻⁵, numerical simulations show that interleaver-code combinations selected using our method have post-FEC BER around 2× target. The target BER is achieved with 0.1 dB extra signal-to-noise ratio.
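The interleaving idea can be sketched with a plain rectangular block interleaver; this is an illustrative stand-in, since the letter's actual interleaver and code dimensioning is analytical:

```python
def interleave(bits, rows, cols):
    """Write row-wise into a rows x cols array, read out column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse permutation of interleave()."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

A burst of b consecutive channel errors (e.g. from a phase slip) is spread so that each row, i.e. each codeword, sees only about b/rows of them, which is what lets a modest BCH code clean up bursty phase-noise errors.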

Relevance:

100.00%

Publisher:

Abstract:

Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems have significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semianalytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors. In this way, we relate pre-FEC statistics to post-FEC BER and BCH codes. Our method is applicable to pre-FEC BER around 10⁻³ and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10⁻⁵. Codes dimensioned with our bivariate binomial model meet the target within 0.2-dB signal-to-noise ratio.

Relevance:

100.00%

Publisher:

Abstract:

Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique. Some were successful in detecting field defects comparable to standard automated perimetry (SAP) visual field assessment, while others were less informative and required further adjustment and research. In this study we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. The purpose of this study is to examine the benefit of adding the mfVEP Hemifield Sector Analysis protocol to the standard HFA test when glaucomatous visual field loss is suspected. Three groups were tested: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspects (38 eyes). All subjects underwent two standard Humphrey visual field (HFA) 24-2 tests, optical coherence tomography of the optic nerve head, and a single mfVEP test in one session. The mfVEP results were analysed using the new protocol, the Hemifield Sector Analysis (HSA) protocol. Retinal nerve fibre layer (RNFL) thickness was recorded to identify subjects with suspicious RNFL loss. The HSA of the mfVEP results showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (ANOVA, p<0.001, 95% CI). The superior-inferior differences were statistically significant in all 11/11 sectors in the glaucoma group (t-test, p<0.001), in 5/11 sectors in the glaucoma suspect group (t-test, p<0.01), and in only 1/11 sectors in the normal group. The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86%, respectively, and 89% and 79% for glaucoma suspects.
Using SAP and mfVEP together in subjects with suspicious glaucomatous visual field defects, identified by low RNFL thickness, is beneficial in confirming early visual field defects. The new HSA protocol used in mfVEP testing can detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Used alongside SAP analysis, it provides information about focal visual field differences across the horizontal midline and confirms suspicious field defects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes in glaucomatous field loss. The HSA protocol can detect early field changes not detected by the standard HFA test.
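For reference, the sensitivity and specificity figures follow directly from the standard confusion counts; a minimal sketch, where the counts are hypothetical, chosen only to roughly match the reported cohort sizes and percentages, and are not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical illustration: 35 of 36 glaucoma eyes flagged,
# 33 of 38 normal eyes correctly cleared.
sens, spec = sensitivity_specificity(tp=35, fn=1, tn=33, fp=5)
```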

Relevance:

100.00%

Publisher:

Abstract:

Advanced signal processing, such as multi-channel digital back propagation and mid-span optical phase conjugation, can compensate for inter-channel nonlinear effects in point-to-point links. However, once such effects are compensated, the interaction between the signal and noise fields becomes dominant. We will show that this interaction has a direct impact on the signal-to-noise ratio improvement, observing that ideal optical phase conjugation offers a 1.5 dB greater performance benefit than DSP-based compensation.

Relevance:

100.00%

Publisher:

Abstract:

Concurrent coding is an encoding scheme with 'holographic'-type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct random errors and burst errors simultaneously, without relying on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding at signal-to-noise ratios of order -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the decoding process, determines the expected computational load, and describes the critical levels of noise and missing data at which false messages begin to be generated.

Relevance:

100.00%

Publisher:

Abstract:

Parameter design is an experimental design and analysis methodology for developing robust processes and products, where robustness implies insensitivity to noise disturbances. Subtle experimental realities, such as the joint effect of process knowledge and analysis methodology, may affect the effectiveness of parameter design in precision engineering, where the objective is to detect minute variation in product and process performance. In this thesis, statistical forced-noise design and analysis methodologies were investigated with respect to detecting performance variations. Given a low degree of process knowledge, Taguchi's signal-to-noise ratio analysis was found to be more suitable for detecting minute performance variations than the classical approach based on polynomial decomposition. Comparison of inner-array noise (IAN) and outer-array noise (OAN) structuring approaches showed that OAN is a more efficient design for precision engineering.
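Taguchi's signal-to-noise ratios mentioned above are simple functionals of replicated measurements; a sketch of two common variants (the thesis does not specify which variant beyond "signal-to-noise ratio analysis", so these are standard textbook forms):

```python
import math

def sn_nominal_the_best(values):
    """Taguchi nominal-the-best S/N ratio: 10*log10(mean^2 / variance)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean of y^2)."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))
```

A control-factor setting with less replicate-to-replicate scatter scores a higher nominal-the-best S/N, which is the sense in which the analysis detects minute performance variations.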

Relevance:

100.00%

Publisher:

Abstract:

There is great demand for advanced engineering tools in biology, biochemistry, and medicine, yet many existing instruments are expensive and require special facilities. With the advent of nanotechnology in the past decade, new approaches to developing devices and tools have been generated by academia and industry. One such technology, NMR spectroscopy, has been used by biochemists for more than two decades to study the molecular structure of chemical compounds. However, NMR spectrometers are very expensive and require special laboratory rooms for proper operation; high magnetic fields with strengths on the order of several tesla make these instruments unaffordable to most research groups. This doctoral research proposes a new technology to develop NMR spectrometers that can operate at field strengths of less than 0.5 tesla using an inexpensive permanent magnet and spin-dependent nanoscale magnetic devices. This portable NMR system is intended to analyze samples as small as a few nanoliters. The main problem to resolve when downscaling the variables is obtaining an NMR signal with a high signal-to-noise ratio (SNR). A special tunneling magneto-resistive (TMR) sensor design was developed to achieve this goal. The minimum specifications for each component of the proposed NMR system were established, and a complete NMR system was designed based on these minimum requirements; the goal was always to find cost-effective, realistic components. The novel design of the NMR system uses technologies such as Direct Digital Synthesis (DDS), Digital Signal Processing (DSP), and a special backpropagation neural network that finds the best match of the NMR spectrum. The system was designed, calculated, and simulated with excellent results. In addition, a general method to design TMR sensors was developed; the technique was automated, and a computer program was written to help the designer perform this task interactively.
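For context on the sub-0.5-tesla operating point, the NMR resonance frequency follows from the Larmor relation f = (γ/2π)·B0; a minimal sketch for protons:

```python
# Proton gyromagnetic ratio divided by 2*pi, in Hz per tesla.
GAMMA_BAR_1H = 42.577e6

def larmor_frequency(b0_tesla, gamma_bar=GAMMA_BAR_1H):
    """Larmor (resonance) frequency f = (gamma / 2*pi) * B0, in Hz."""
    return gamma_bar * b0_tesla
```

At 0.5 T this gives roughly 21.3 MHz, far easier to generate and digitize with low-cost DDS/DSP electronics than the hundreds of MHz required by multi-tesla spectrometers, though the weaker polarization is what makes the SNR problem central.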

Relevance:

100.00%

Publisher:

Abstract:

This work looks at the effect on mid-gap interface state defect density estimates for In0.53Ga0.47As semiconductor capacitors when different AC voltage amplitudes are selected for a fixed voltage bias step size (100 mV) during room temperature only electrical characterization. Results are presented for Au/Ni/Al2O3/In0.53Ga0.47As/InP metal–oxide–semiconductor capacitors with (1) n-type and p-type semiconductors, (2) different Al2O3 thicknesses, (3) different In0.53Ga0.47As surface passivation concentrations of ammonium sulphide, and (4) different transfer times to the atomic layer deposition chamber after passivation treatment on the semiconductor surface—thereby demonstrating a cross-section of device characteristics. The authors set out to determine the importance of the AC voltage amplitude selection on the interface state defect density extractions and whether this selection has a combined effect with the oxide capacitance. These capacitors are prototypical of the type of gate oxide material stacks that could form equivalent metal–oxide–semiconductor field-effect transistors beyond the 32 nm technology node. The authors do not attempt to achieve the best scaled equivalent oxide thickness in this work, as our focus is on accurately extracting device properties that will allow the investigation and reduction of interface state defect densities at the high-k/III–V semiconductor interface. The operating voltage for future devices will be reduced, potentially leading to an associated reduction in the AC voltage amplitude, which will force a decrease in the signal-to-noise ratio of electrical responses and could therefore result in less accurate impedance measurements. 
A concern thus arises regarding the accuracy of the electrical property extractions using such impedance measurements for future devices, particularly in relation to the mid-gap interface state defect density estimated from the conductance method and from the combined high–low frequency capacitance–voltage method. The authors apply a fixed voltage step of 100 mV for all voltage sweep measurements at each AC frequency. Each of these measurements is repeated 15 times for the equidistant AC voltage amplitudes between 10 mV and 150 mV. This provides the desired AC voltage amplitude to step size ratios from 1:10 to 3:2. Our results indicate that, although the selection of the oxide capacitance is important both to the success and accuracy of the extraction method, the mid-gap interface state defect density extractions are not overly sensitive to the AC voltage amplitude employed regardless of what oxide capacitance is used in the extractions, particularly in the range from 50% below the voltage sweep step size to 50% above it. Therefore, the use of larger AC voltage amplitudes in this range to achieve a better signal-to-noise ratio during impedance measurements for future low operating voltage devices will not distort the extracted interface state defect density.

Relevance:

100.00%

Publisher:

Abstract:

This paper considers a wirelessly powered wiretap channel, where an energy-constrained multi-antenna information source, powered by a dedicated power beacon, communicates with a legitimate user in the presence of a passive eavesdropper. Based on a simple time-switching protocol in which power transfer and information transmission are separated in time, we investigate two popular multi-antenna transmission schemes at the information source, namely maximum ratio transmission (MRT) and transmit antenna selection (TAS). Closed-form expressions are derived for the achievable secrecy outage probability and average secrecy rate of both schemes. In addition, simple approximations are obtained in the high signal-to-noise ratio (SNR) regime. Our results demonstrate that exploiting full knowledge of the channel state information (CSI) achieves better secrecy performance; for example, with full CSI of the main channel, the system can achieve substantial secrecy diversity gain, whereas without it no diversity gain can be attained. Moreover, we show that the additional randomness induced by wireless power transfer does not affect the secrecy performance in the high SNR regime. Finally, our theoretical claims are validated by numerical results.
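A minimal Monte Carlo sketch of the TAS scheme's secrecy outage probability under i.i.d. Rayleigh fading; this is an illustrative model only (the paper derives closed-form expressions, and the wireless power transfer stage is omitted here):

```python
import math
import random

def secrecy_outage_tas(n_ant, snr_db, rate_s, trials=50_000, seed=2):
    """Monte Carlo secrecy outage probability for transmit antenna selection:
    the antenna with the strongest main-channel gain is selected, while the
    eavesdropper sees an independent Rayleigh-faded gain."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(trials):
        g_main = max(rng.expovariate(1) for _ in range(n_ant))  # TAS gain
        g_eve = rng.expovariate(1)
        c_s = max(0.0, math.log2(1 + snr * g_main) - math.log2(1 + snr * g_eve))
        outages += c_s < rate_s  # secrecy capacity below the target rate
    return outages / trials
```

Selecting among more antennas stochastically boosts the main-channel gain relative to the eavesdropper's, which is the source of the secrecy diversity gain the paper quantifies.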

Relevance:

100.00%

Publisher:

Abstract:

This paper presents an analytical performance investigation of beamforming (BF) and interference cancellation (IC) strategies for a device-to-device (D2D) communication system underlaying a cellular network with an M-antenna base station (BS). We first derive new closed-form expressions for the ergodic achievable rate of the BF and IC precoding strategies with quantized channel state information (CSI), as well as with perfect CSI. Then, novel lower and upper bounds are derived which apply for an arbitrary number of antennas and are shown to be tight against Monte Carlo results. Based on these results, we examine in detail three important special cases: high signal-to-noise ratio (SNR), weak interference between the cellular and D2D links, and a BS equipped with a large number of antennas, deriving asymptotic expressions for the ergodic achievable rate in each scenario. These results yield valuable insights into the impact of system parameters such as the number of antennas, the SNR, and the interference on each link. In particular, we show that an irreducible saturation point exists in the high SNR regime, while the ergodic rate under the IC strategy is verified to be always better than that under the BF strategy. We also reveal that the ergodic achievable rate under perfect CSI scales as log2 M, whilst it reaches a ceiling with quantized CSI.
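The log2 M scaling under perfect CSI can be illustrated with a Monte Carlo sketch of maximum ratio combining/transmission over i.i.d. Rayleigh fading, where the post-combining channel gain is a sum of M unit-mean exponentials; interference and CSI quantization, central to the paper's analysis, are omitted from this sketch:

```python
import math
import random

def ergodic_rate_mrt(m_ant, snr_db, trials=20_000, seed=3):
    """Monte Carlo ergodic rate E[log2(1 + SNR * g)] with perfect CSI,
    where g ~ sum of m_ant i.i.d. Exp(1) gains (Rayleigh fading + MRT)."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        gain = sum(rng.expovariate(1) for _ in range(m_ant))
        total += math.log2(1 + snr * gain)
    return total / trials
```

Quadrupling M at high SNR adds roughly log2(4) = 2 bits/s/Hz in this model, whereas with quantized CSI the residual quantization interference caps the rate, which is the ceiling the paper identifies.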

Relevance:

100.00%

Publisher:

Abstract:

Photometry of moving sources typically suffers from a reduced signal-to-noise ratio (S/N) or flux measurements biased to incorrectly low values through the use of circular apertures. To address this issue, we present the software package TRIPPy: TRailed Image Photometry in Python. TRIPPy introduces the pill aperture, the natural extension of the circular aperture for linearly trailed sources. The pill shape is a rectangle with two semicircular end-caps, described by three parameters: the trail length, the trail angle, and the radius. The TRIPPy package also includes a new technique to generate accurate model point-spread functions (PSFs) and trailed PSFs (TSFs) from stationary background sources in sidereally tracked images. The TSF is simply the appropriately trailed convolution of the model PSF, which consists of a Moffat profile and a super-sampled lookup table. From the TSF, accurate pill aperture corrections can be estimated as a function of pill radius, with an accuracy of 10 mmag even for highly trailed sources. Analogous to the use of small circular apertures with associated aperture corrections, small-radius pill apertures can be used to preserve the S/N of low-flux sources, with an appropriate aperture correction applied to provide an accurate, unbiased flux measurement at all S/Ns.
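The pill geometry is easy to state in code; a sketch of the aperture area and a point-in-pill membership test (illustrative only, not TRIPPy's API):

```python
import math

def pill_area(radius, length):
    """Area of a pill aperture: a 2r x L rectangle plus two semicircular caps."""
    return math.pi * radius ** 2 + 2 * radius * length

def in_pill(x, y, radius, length, angle):
    """True if (x, y) lies inside a pill centred at the origin, whose trail
    of the given length runs at the given angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    u = x * c + y * s        # along-trail coordinate
    v = -x * s + y * c       # cross-trail coordinate
    du = min(max(u, -length / 2), length / 2)  # nearest point on the segment
    return math.hypot(u - du, v) <= radius
```

Equivalently, the pill is the set of points within one radius of the trail segment, which is why it reduces to a circular aperture when the trail length is zero.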

Relevance:

100.00%

Publisher:

Abstract:

One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery, the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In the simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude, based on a study of the mutual information, that ICA does not correctly unmix all sources. Nevertheless, some sources may be well separated, particularly when the number of sources is large and the signal-to-noise ratio (SNR) is high.
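The sum-to-one constraint's effect can be demonstrated directly: abundance fractions drawn uniformly on the simplex are necessarily negatively correlated, so they cannot be independent. A minimal sketch (a flat Dirichlet draw via normalised exponentials, an illustrative stand-in for the paper's generative model):

```python
import random

def simplex_abundances(n_sources, rng):
    """Uniform sample on the simplex (fractions sum to one): normalised
    i.i.d. Exp(1) draws, i.e. a flat Dirichlet sample."""
    g = [rng.expovariate(1) for _ in range(n_sources)]
    s = sum(g)
    return [v / s for v in g]

def correlation(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5
```

For a flat Dirichlet with K sources the pairwise correlation is -1/(K-1); this built-in dependence is exactly what violates ICA's core assumption.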

Relevance:

100.00%

Publisher:

Abstract:

Small particles and their dynamics are of widespread interest due both to their unique properties and their ubiquity. Here, we investigate several classes of small particles: colloids, polymers, and liposomes. All these particles, with sizes on the order of microns, are large enough to be visualized in microscopes but small enough to be significantly influenced by thermal (Brownian) motion, and similar optical microscopy and experimental techniques are commonly employed to investigate them. In this work, we develop single particle tracking techniques that allow thorough characterization of individual particle dynamics, observing many behaviors that would be overlooked by methods that time- or ensemble-average. Across these particle systems, the signal-to-noise ratio was frequently a significant concern; in many cases, developing image analysis and particle tracking methods optimized for low signal-to-noise ratios was critical to the experimental observations. The simplest particles studied, in terms of their interaction potentials, were chemically homogeneous (though optically anisotropic) hard-sphere colloids. Using these spheres, we explored the comparatively underdeveloped conjunction of translation, rotation, and particle hydrodynamics. Building on this, the dynamics of clusters of spherical colloids were investigated, exploring how shape anisotropy influences translation and rotation. Moving away from uniform hard-sphere potentials, the interactions of amphiphilic colloidal particles were explored, observing the effects of hydrophilic and hydrophobic interactions on pattern assembly and inter-particle dynamics. Interaction potentials were altered in a different fashion by working with suspensions of liposomes, which, while homogeneous, introduce the possibility of deformation.
Further degrees of freedom were introduced by observing the interaction of particles, and then polymers, within polymer suspensions or along lipid tubules. Throughout, examination of the trajectories revealed that, while by some measures the averaged behaviors accorded with expectation, the closer examination made possible by single particle tracking often revealed novel and unexpected phenomena.
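A standard first diagnostic in single particle tracking is the mean squared displacement (MSD), which grows linearly with lag time for free Brownian motion; a minimal sketch, time-averaged over a single 2-D trajectory:

```python
import random

def mean_squared_displacement(traj, lag):
    """Time-averaged MSD at a given lag for a 2-D trajectory [(x, y), ...]."""
    disp = [(traj[i + lag][0] - traj[i][0]) ** 2 +
            (traj[i + lag][1] - traj[i][1]) ** 2
            for i in range(len(traj) - lag)]
    return sum(disp) / len(disp)

# Demo: a discrete 2-D random walk, whose MSD grows linearly with lag.
rng = random.Random(4)
traj, x, y = [(0.0, 0.0)], 0.0, 0.0
for _ in range(5000):
    x += rng.gauss(0, 1)
    y += rng.gauss(0, 1)
    traj.append((x, y))
msd1 = mean_squared_displacement(traj, 1)
msd10 = mean_squared_displacement(traj, 10)
```

Deviations from linear MSD growth (sub- or super-diffusion) are the kind of averaged signature that can agree with expectation even when individual trajectories show richer behavior, which is why per-particle analysis matters.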

Relevance:

100.00%

Publisher:

Abstract:

Several studies have reported changes in spontaneous brain rhythms that could be used as clinical biomarkers or in the evaluation of neuropsychological and drug treatments in longitudinal studies using magnetoencephalography (MEG). There is an increasing need to use these measures in early diagnosis and in tracking pathology progression; however, there is a lack of studies addressing how reliable they are. Here, we provide the first test-retest reliability estimate of MEG power in resting state at sensor and source space. In this study, we recorded 3 sessions of resting-state MEG activity from 24 healthy subjects with an interval of a week between sessions. Power values were estimated at sensor and source space (with beamforming) for the classical frequency bands: delta (2–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), low beta (13–20 Hz), high beta (20–30 Hz), and gamma (30–45 Hz). Test-retest reliability was then evaluated using the intraclass correlation coefficient (ICC). We also evaluated the relation between source power and within-subject variability. In general, the ICC of theta, alpha, and low beta power was fairly high (ICC > 0.6), while in delta and gamma power it was lower. In source space, fronto-posterior alpha, frontal beta, and medial temporal theta showed the most reliable profiles. Signal-to-noise ratio could be partially responsible for reliability, as low signal intensity resulted in high within-subject variability, but the inherent nature of some brain rhythms in resting state might also be driving these reliability patterns. In conclusion, our results describe the reliability of MEG power estimates in each frequency band, which could be considered in disease characterization or clinical trials.
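A sketch of one common ICC variant, the one-way random-effects ICC(1,1); the text does not specify which ICC form was used, so this is an assumption for illustration:

```python
def icc_one_way(data):
    """One-way random-effects ICC(1,1).
    data: list of per-subject lists, each with the same number k of sessions."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With 24 subjects and 3 sessions, each row would hold one subject's band power across the three recordings; values near 1 indicate that between-subject differences dominate session-to-session noise.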