870 results for Contrast-to-noise ratio


Relevance:

100.00%

Publisher:

Abstract:

Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems have significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semi-analytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors. In this way, we relate pre-FEC statistics to post-FEC BER and BCH codes. Our method is applicable to pre-FEC BERs around 10⁻³ and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10⁻⁵. Codes dimensioned with our bivariate binomial model meet the target to within 0.2 dB of signal-to-noise ratio.
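
To make the dimensioning step concrete, here is a minimal Python sketch of the memoryless baseline that the bivariate model generalizes: with independent bit errors, a univariate binomial model predicts the post-FEC BER of a t-error-correcting BCH code directly from the pre-FEC BER. The (j + t)/n miscorrection heuristic is a common assumption for bounded-distance decoders, not a detail taken from the paper, whose bivariate model additionally captures the error clustering caused by phase noise.

    # Baseline sketch: post-FEC BER of a t-error-correcting binary BCH code
    # under a memoryless (AWGN-only) binomial error model. Hypothetical
    # helper, not the paper's bivariate method.
    from scipy.stats import binom

    def post_fec_ber(n, t, pre_ber):
        """Approximate post-FEC BER for codeword length n, correcting t errors,
        assuming independent bit errors with probability pre_ber."""
        total = 0.0
        for j in range(t + 1, n + 1):
            # A codeword with j > t errors fails to decode; a bounded-distance
            # decoder is commonly assumed to leave about j + t erroneous bits.
            total += (j + t) / n * binom.pmf(j, n, pre_ber)
        return total

    # Example: a length-1023 BCH code correcting t = 8 errors at pre-FEC BER 1e-3.
    print(post_fec_ber(n=1023, t=8, pre_ber=1e-3))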

Relevance:

100.00%

Publisher:

Abstract:

Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. The performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
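
As an illustration of the evaluation protocol (not the paper's code), the sketch below degrades a synthetic image with Poisson quantum noise, applies a stand-in denoiser, and scores the restoration with the same metrics used in the study; a Gaussian filter is substituted here for the algorithms actually compared (AAS, BM3Dc, BM3D, K-SVD, ...).

    # Sketch: Poisson-noise degradation and PSNR/MSE/SSIM scoring of a denoiser.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.metrics import peak_signal_noise_ratio, mean_squared_error, structural_similarity

    rng = np.random.default_rng(0)
    clean = np.clip(rng.random((128, 128)), 0.05, 1.0)   # synthetic test image
    photons = 50                                         # dose level: lower = noisier
    noisy = rng.poisson(clean * photons) / photons       # signal-dependent quantum noise
    denoised = gaussian_filter(noisy, sigma=1.0)         # placeholder denoiser

    for name, img in [("noisy", noisy), ("denoised", denoised)]:
        print(name,
              "PSNR=%.2f" % peak_signal_noise_ratio(clean, img, data_range=1.0),
              "MSE=%.5f" % mean_squared_error(clean, img),
              "SSIM=%.3f" % structural_similarity(clean, img, data_range=1.0))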

Relevance:

100.00%

Publisher:

Abstract:

Advanced signal processing, such as multi-channel digital back-propagation and mid-span optical phase conjugation, can compensate for inter-channel nonlinear effects in point-to-point links. However, once such effects are compensated, the interaction between the signal and noise fields becomes dominant. We will show that this interaction has a direct impact on the signal-to-noise ratio improvement, observing that ideal optical phase conjugation offers a 1.5 dB greater performance benefit than DSP-based compensation.

Relevance:

100.00%

Publisher:

Abstract:

Concurrent coding is an encoding scheme with 'holographic'-type properties, shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, without relying on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of order −18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the process of decoding, can determine the expected computational load, and describes the critical levels of noise and missing data at which false messages begin to be generated.

Relevance:

100.00%

Publisher:

Abstract:

In this Letter, we theoretically and numerically analyze the performance of coherent optical transmission systems that deploy inline or transceiver-based nonlinearity compensation techniques. For systems where signal-signal nonlinear interactions are fully compensated, we find that beyond the performance peak the signal-to-noise ratio degradation has a slope of 3 dB(SNR)/dB(power), suggesting a quartic rather than quadratic dependence on signal power. This is directly related to the fact that signals in a given span interact not only with linear amplified spontaneous emission noise, but also with the nonlinear four-wave mixing products generated from signal-noise interaction in previous, hitherto uncompensated spans. The performance of optical systems employing different nonlinearity compensation schemes was numerically simulated and compared against analytical predictions, showing good agreement within a 0.4 dB margin of error.
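
One way to see where the 3 dB/dB slope comes from (a hedged reconstruction consistent with the abstract, not the Letter's full derivation): if the residual signal-noise four-wave mixing noise grows as the fourth power of the launch power P, the received SNR behaves as

    % SNR versus launch power with a quartic signal-noise nonlinear noise term
    \mathrm{SNR}(P) \;\approx\; \frac{P}{P_{\mathrm{ASE}} + \eta P^{4}}
    \;\longrightarrow\; \frac{1}{\eta P^{3}} \quad (P \text{ large}),

so SNR falls by 3 dB for every 1 dB of extra launch power beyond the peak, whereas a merely quadratic noise term (η P²) would give a 1 dB/dB slope.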

Relevance:

100.00%

Publisher:

Abstract:

Advanced engineering tools are in great demand in biology, biochemistry and medicine. Many of the existing instruments and tools are expensive and require special facilities. With the advent of nanotechnology in the past decade, new approaches to developing devices and tools have been generated by academia and industry. One such technology, NMR spectroscopy, has been used by biochemists for more than two decades to study the molecular structure of chemical compounds. However, NMR spectrometers are very expensive and require special laboratory rooms for their proper operation. High magnetic fields, with strengths on the order of several tesla, make these instruments unaffordable to most research groups. This doctoral research proposes a new technology for NMR spectrometers that can operate at field strengths of less than 0.5 tesla, using an inexpensive permanent magnet and spin-dependent nanoscale magnetic devices. This portable NMR system is intended to analyze samples as small as a few nanoliters. The main problem to resolve when downscaling these variables is obtaining an NMR signal with a high signal-to-noise ratio (SNR). A special tunneling magneto-resistive (TMR) sensor design was developed to achieve this goal. The minimum specifications for each component of the proposed NMR system were established, and a complete NMR system was designed based on these minimum requirements. The goal was always to find cost-effective, realistic components. The novel design of the NMR system uses technologies such as direct digital synthesis (DDS), digital signal processing (DSP) and a special backpropagation neural network that finds the best match to the NMR spectrum. The system was designed, calculated and simulated with excellent results. In addition, a general method to design TMR sensors was developed. The technique was automated, and a computer program was written to help the designer perform this task interactively.
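
As a toy illustration of the spectrum-matching idea (hypothetical; the dissertation's network and training data are not reproduced here), a one-hidden-layer network trained by backpropagation can learn to assign a noisy spectrum to the best-matching reference compound:

    # Toy sketch: a tiny neural network trained by backpropagation to match
    # a noisy spectrum to one of a few reference compounds. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    n_bins, n_classes = 64, 3
    refs = rng.random((n_classes, n_bins))              # stand-in reference spectra

    # Training set: noisy copies of each reference spectrum, one-hot labels.
    X = np.vstack([r + 0.05 * rng.standard_normal((200, n_bins)) for r in refs])
    T = np.repeat(np.eye(n_classes), 200, axis=0)

    W1 = 0.1 * rng.standard_normal((n_bins, 16)); b1 = np.zeros(16)
    W2 = 0.1 * rng.standard_normal((16, n_classes)); b2 = np.zeros(n_classes)

    for _ in range(200):                                # batch gradient descent
        H = np.tanh(X @ W1 + b1)                        # hidden layer
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)               # softmax output
        G = (P - T) / len(X)                            # cross-entropy gradient
        GH = (G @ W2.T) * (1 - H ** 2)                  # backpropagate through tanh
        W2 -= 0.5 * H.T @ G; b2 -= 0.5 * G.sum(axis=0)
        W1 -= 0.5 * X.T @ GH; b1 -= 0.5 * GH.sum(axis=0)

    test = refs[1] + 0.05 * rng.standard_normal(n_bins) # unseen noisy spectrum
    scores = np.tanh(test @ W1 + b1) @ W2 + b2
    print("best match: compound", int(np.argmax(scores)))   # expected: 1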

Relevance:

100.00%

Publisher:

Abstract:

A report from the National Institutes of Health defines a disease biomarker as a “characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention.” Early diagnosis is a crucial factor for incurable diseases such as cancer and Alzheimer's disease (AD). During the last decade, researchers have discovered that the biochemical changes caused by a disease can be detected considerably earlier than physical manifestations/symptoms. In this dissertation, electrochemical detection was utilized as the detection strategy, as it offers high sensitivity/specificity, ease of operation, and capability for miniaturization and multiplexed detection. Electrochemical detection of biological analytes is an established field that has matured at a rapid pace during the last 50 years, adapting itself to advances in micro/nanofabrication procedures. Carbon fiber microelectrodes were utilized as the platform sensor due to their high signal-to-noise ratio, ease and low cost of fabrication, biocompatibility, and active carbon surface, which allows conjugation with biorecognition moieties. This dissertation specifically focuses on the detection of three extensively validated biomarkers for cancer and AD. Firstly, vascular endothelial growth factor (VEGF), a cancer biomarker, was detected using a one-step, reagentless immunosensing strategy. The immunosensing strategy allowed rapid and sensitive VEGF detection, with a detection limit of about 38 pg/mL and a linear dynamic range of 0-100 pg/mL. Direct detection of the AD-related biomarker amyloid beta (Aβ) was achieved by exploiting its inherent electroactivity. Quantification of the Aβ1-40/Aβ1-42 ratio (the Aβ ratio) has been established as a reliable test to diagnose AD through human clinical trials. Triple-barrel carbon fiber microelectrodes were used to simultaneously detect Aβ1-40 and Aβ1-42 in cerebrospinal fluid from rats, within detection ranges of 100 nM to 1.2 μM and 400 nM to 1 μM, respectively. In addition, the release of the DNA damage/repair biomarker 8-hydroxydeoxyguanosine (8-OHdG) under reactive oxidative stress from a single lung endothelial cell was monitored using an activated carbon fiber microelectrode. The sensor was used to test the influence of nicotine, one of the most biologically active chemicals present in cigarette smoke and smokeless tobacco.
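
For context on how a detection limit such as the 38 pg/mL figure is typically derived, here is a generic 3σ/slope estimate from a linear calibration curve (the numbers are invented for illustration, not the dissertation's data):

    # Sketch: limit of detection (LOD) from a linear calibration curve
    # using the common 3*sigma/slope rule. Illustrative numbers only.
    import numpy as np

    conc = np.array([0.0, 20, 40, 60, 80, 100])                 # concentration, pg/mL
    current = np.array([0.02, 0.41, 0.83, 1.18, 1.62, 2.01])    # sensor response, nA

    slope, intercept = np.polyfit(conc, current, 1)             # linear-range fit
    sigma_blank = 0.025                                         # std. dev. of blank, nA
    lod = 3 * sigma_blank / slope
    print(f"sensitivity = {slope:.4f} nA per pg/mL, LOD ≈ {lod:.1f} pg/mL")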

Relevance:

100.00%

Publisher:

Abstract:

The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose-Chaudhuri-Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase-shift keying (DQPSK) modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3(Formula presented.) target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
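
A minimal sketch of the block-interleaving idea (dimensions are illustrative; the paper's interleavers are specifically optimized for DQPSK): bits are written row-by-row into an R × C array and read out column-by-column, so a burst of consecutive channel errors, such as those following a cycle slip, is spread across many BCH codewords.

    # Sketch: block interleaver that disperses burst errors across codewords.
    import numpy as np

    def interleave(bits, rows, cols):
        return np.asarray(bits).reshape(rows, cols).T.ravel()

    def deinterleave(bits, rows, cols):
        return np.asarray(bits).reshape(cols, rows).T.ravel()

    rows, cols = 8, 16                         # 8 codewords of length 16 (toy sizes)
    data = np.arange(rows * cols)              # stand-in for coded bits
    tx = interleave(data, rows, cols)
    assert np.array_equal(deinterleave(tx, rows, cols), data)

    # A burst of 4 consecutive channel errors hits 4 different codewords:
    burst = range(40, 44)
    print(sorted(int(tx[i]) // cols for i in burst))   # -> [0, 1, 2, 3]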

Relevance:

100.00%

Publisher:

Abstract:

One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery, the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In the simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not correctly unmix all sources. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly if the number of sources is large and the signal-to-noise ratio (SNR) is high.
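
The dependence argument can be reproduced in a few lines (an illustrative simulation, not the paper's generative model): abundances drawn from a Dirichlet distribution sum to one and are therefore statistically dependent, and FastICA recovers them only imperfectly.

    # Sketch: ICA unmixing of sum-to-one (hence dependent) abundance sources.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_pixels, n_bands, n_sources = 5000, 50, 3
    abundances = rng.dirichlet(np.ones(n_sources), size=n_pixels)   # rows sum to 1
    endmembers = rng.random((n_sources, n_bands))                   # signatures
    observations = (abundances @ endmembers
                    + 0.001 * rng.standard_normal((n_pixels, n_bands)))

    ica = FastICA(n_components=n_sources, random_state=0, max_iter=500)
    estimated = ica.fit_transform(observations)

    # Correlate each estimated source with each true abundance map; with
    # dependent sources the match is typically imperfect, as the paper argues.
    corr = np.corrcoef(estimated.T, abundances.T)[:n_sources, n_sources:]
    print(np.round(np.abs(corr), 2))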

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To evaluate whether physical measures of noise predict image quality at high and low noise levels. Method: Twenty-four images were acquired on a DR system using a Pehamed DIGRAD phantom at three kVp settings (60, 70 and 81) across a range of mAs values. The image acquisition setup consisted of 14 cm of PMMA slabs with the phantom placed in the middle, at 120 cm SID. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for each of the images using ImageJ software, and 14 observers performed image scoring. Images were scored according to the observers' evaluation of objects visualized within the phantom. Results: The R² values of the non-linear relationship between object visibility score and CNR (60 kVp R² = 0.902; 70 kVp R² = 0.913; 81 kVp R² = 0.757) demonstrate a better fit for all three kVp settings than the linear R² values. As CNR increases, object visibility also increases at all kVp settings. The largest increase in SNR at low exposure values (up to 2 mGy) is observed at 60 kVp, when compared with 70 or 81 kVp; the CNR response to exposure is similar. Pearson r was calculated to assess the correlation between score, object visibility (OV), SNR and CNR. None of the correlations reached statistical significance (p>0.01). Conclusion: For object visibility and SNR, tube potential variations may play a role in object visibility. Higher-energy X-ray beam settings give lower SNR but higher object visibility. Object visibility and CNR at all three tube potentials are similar, resulting in a strong positive relationship between CNR and object visibility score. At low doses, radiographic noise does not have a strong influence on object visibility scores because objects could still be identified in noisy images.
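
For reference, SNR and CNR figures like these come from simple region-of-interest statistics; a sketch with the usual definitions follows (the study used ImageJ, and the ROI placement and exact formulas here are assumptions, not taken from the paper):

    # Sketch: ROI-based SNR and CNR, using the common definitions
    # SNR = mean_signal / sd_background and
    # CNR = |mean_signal - mean_background| / sd_background.
    import numpy as np

    def roi_stats(image, y, x, size):
        roi = image[y:y + size, x:x + size]
        return roi.mean(), roi.std()

    rng = np.random.default_rng(0)
    image = rng.normal(100, 5, (256, 256))          # synthetic background
    image[100:140, 100:140] += 30                   # a phantom test object

    m_sig, _ = roi_stats(image, 110, 110, 20)       # ROI inside the object
    m_bg, sd_bg = roi_stats(image, 10, 10, 20)      # ROI in the background
    print("SNR = %.1f  CNR = %.1f" % (m_sig / sd_bg, abs(m_sig - m_bg) / sd_bg))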

Relevance:

100.00%

Publisher:

Abstract:

Atmospheric scattering plays a crucial role in degrading the performance of electro-optical imaging systems operating in the visible and infra-red spectral bands, and hence limits the quality of the acquired images, either through reduction of contrast or increase of image blur. The exact nature of light scattering by atmospheric media is highly complex and depends on the types, orientations, sizes and distributions of the particles constituting these media, as well as the wavelengths, polarization states and directions of the propagating radiation. Here we follow the common approach for solving imaging and propagation problems by treating the light propagating through atmospheric media as composed of two main components: a direct (unscattered) and a scattered component. In this work we developed a detailed model, based on the classical theory of EM scattering, of the effects of absorption and scattering by haze and fog atmospheric aerosols on the optical radiation propagating from the object plane to an imaging system. This detailed model is then used to compute the average point spread function (PSF) of an imaging system, properly accounting for the effects of diffraction, scattering, and the appropriate optical power levels of both the direct and the scattered radiation arriving at the pupil of the imaging system. The calculated PSF, properly weighted for the energy contributions of the direct and scattered components, is used in combination with a radiometric model to estimate the average numbers of direct and scattered photons detected at the sensor plane, which are then used to calculate the image-spectrum signal-to-noise ratio (SNR) in the visible, near infra-red (NIR) and mid infra-red (MIR) spectral wavelength bands. Reconstruction of images degraded by atmospheric scattering and measurement noise is then performed, up to the limit imposed by the noise-effective cutoff spatial frequency of the image-spectrum SNR. Key results of this research are as follows. A mathematical model, based on Mie scattering theory, of how scattering from aerosols affects the overall point spread function (PSF) of an imaging system was developed, coded in MATLAB, and demonstrated. This model, along with radiometric theory, was used to predict the limiting resolution of an imaging system as a function of the optics, scattering environment, and measurement noise. Finally, image reconstruction algorithms were developed and demonstrated which mitigate the effects of scattering-induced blurring to within the limits imposed by noise.
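
The reconstruction stage can be illustrated with a Wiener-style inverse filter regularised by the image-spectrum SNR (a simplified Python stand-in for the dissertation's MATLAB implementation; the two-component frequency response below is invented for the demonstration, loosely mimicking a direct plus scattered PSF):

    # Sketch: Wiener-style restoration of a scatter-blurred image, limited by
    # an assumed image-spectrum SNR (the noise-effective cutoff idea above).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    x = np.zeros((n, n)); x[96:160, 96:160] = 1.0        # simple object plane

    # Stand-in transfer function: broad direct + narrow scattered component.
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    H = (0.7 * np.exp(-(fx**2 + fy**2) / (2 * 0.05**2))
         + 0.3 * np.exp(-(fx**2 + fy**2) / (2 * 0.005**2)))

    y = np.real(np.fft.ifft2(np.fft.fft2(x) * H))        # blurred image
    y += 0.01 * rng.standard_normal((n, n))              # measurement noise

    snr_spec = 100.0                                     # assumed image-spectrum SNR
    W = np.conj(H) / (np.abs(H)**2 + 1.0 / snr_spec)     # Wiener filter
    x_hat = np.real(np.fft.ifft2(np.fft.fft2(y) * W))
    print("mean reconstruction error: %.3f" % np.abs(x - x_hat).mean())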

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Peripherally inserted central catheters (PICCs) are a common vascular access device used in clinical practice. Their use may be complicated by adverse events such as venous thromboembolism (VTE). The size of the vein used for PICC insertion, and thus the catheter-to-vein ratio, is thought to be a controllable factor in the reduction of VTE rates in patients who have a PICC. However, an optimal catheter-to-vein ratio for PICC insertion has not previously been investigated to inform clinical practice. OBJECTIVES: To determine the effect of the catheter-to-vein ratio (the proportion of the vein, measured at the insertion point, taken up by the catheter) on rates of symptomatic VTE in patients with a PICC, and to identify the optimal ratio cut-off point to reduce rates of this adverse event. METHOD: Adult patients waiting for PICC insertion at a large metropolitan teaching hospital were recruited between May and December 2013. Vein diameter at the PICC insertion site was measured using ultrasound with in-built callipers. Participants were followed up at eight weeks to determine whether they had developed symptomatic VTE. RESULTS: Data were available for 136 patients (50% cancer; 44% infection; 6% other indication for PICC). Mean age was 57 years, and 54% were male. There were four cases of confirmed symptomatic VTE (two involving the deep veins, one a peripheral vein, and one pulmonary embolism). Receiver operating characteristic (ROC) analysis determined that a 45% catheter-to-vein ratio was the ideal cut-off point to maximise sensitivity and specificity (AUC 0.761; 95% CI 0.681-0.830). When a ratio of 46% or above was compared to one of 45% or below using a log-binomial generalised linear model, participants with a catheter-to-vein ratio >45% were found to be 13 times more likely to suffer VTE (relative risk 13; p=0.022; CI 1.445-122.788). CONCLUSION: A 45% catheter-to-vein ratio was found to be the optimal cut-off, with high sensitivity and specificity, to reduce the risk of VTE. However, further research is needed to confirm these results because, although the study was adequately powered, the number of VTE cases was comparatively small, resulting in wide confidence intervals.
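
The cut-off selection step corresponds to a standard ROC workflow, choosing the threshold that maximises sensitivity plus specificity (Youden's J). The sketch below uses synthetic values for illustration only, not the study's patient data:

    # Sketch: ROC-based cut-off selection for catheter-to-vein ratio.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)
    ratio = np.concatenate([rng.normal(0.38, 0.08, 132),   # no VTE (synthetic)
                            rng.normal(0.55, 0.08, 4)])    # VTE cases (synthetic)
    vte = np.concatenate([np.zeros(132), np.ones(4)])

    fpr, tpr, thresholds = roc_curve(vte, ratio)
    j = tpr - fpr                                          # Youden's J statistic
    best = thresholds[np.argmax(j)]
    print("AUC = %.3f, optimal cut-off ≈ %.0f%%"
          % (roc_auc_score(vte, ratio), 100 * best))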

Relevance:

100.00%

Publisher:

Abstract:

Background: Indices predictive of central obesity include waist circumference (WC) and waist-to-height ratio (WHtR). The aims of this study were 1) to establish smoothed centile charts and LMS tables for WC and WHtR in Colombian youth and 2) to evaluate the utility of these parameters as predictors of overweight and obesity. Method: A cross-sectional study whose sample population comprised 7954 healthy Colombian schoolchildren [boys n=3460 and girls n=4494, mean (standard deviation) age 12.8 (2.3) years]. Weight, height, body mass index (BMI), WC and WHtR and their percentiles were calculated. Appropriate cut-off points of WC and WHtR for overweight and obesity, as defined by the International Obesity Task Force (IOTF), were selected using receiver operating characteristic (ROC) analysis. The discriminating power of WC and WHtR was expressed as the area under the curve (AUC). Results: Reference values for WC and WHtR are presented. Mean WC increased and WHtR decreased with age for both genders. We found a moderate positive correlation between WC and BMI (r = 0.756, P < 0.01) and between WHtR and BMI (r = 0.604, P < 0.01). The ROC analysis showed high discriminating power in the identification of overweight and obesity for both measures in our sample population. Overall, WHtR was a slightly better predictor of overweight/obesity (AUC 95% CI 0.868-0.916) than WC (AUC 95% CI 0.862-0.904). Conclusion: This paper presents the first sex- and age-specific WC and WHtR percentiles among Colombian children and adolescents aged 9-17.9 years. By providing LMS tables for Latin-American people based on Colombian reference data, we hope to provide quantitative tools for the study of obesity and its comorbidities.
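
For readers using such LMS tables, a measurement is converted to a z-score with Cole's LMS formula, z = ((X/M)^L − 1)/(L·S) for L ≠ 0 (and z = ln(X/M)/S for L = 0); the sketch below uses hypothetical placeholder values for L, M and S, not the paper's reference tables:

    # Sketch: Cole's LMS z-score from age/sex-specific L (skewness),
    # M (median) and S (coefficient of variation) values.
    import math

    def lms_zscore(x, L, M, S):
        if L == 0:
            return math.log(x / M) / S
        return ((x / M) ** L - 1) / (L * S)

    # e.g. a waist circumference of 72 cm against hypothetical reference values
    print(round(lms_zscore(72.0, L=-0.2, M=65.0, S=0.08), 2))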

Relevance:

100.00%

Publisher:

Abstract:

Speech recognition in car environments has been identified as a valuable means of reducing driver distraction when operating non-critical in-car systems. Likelihood-maximising (LIMA) frameworks optimise speech enhancement algorithms based on recognised state sequences rather than traditional signal-level criteria such as maximising signal-to-noise ratio. Previously presented LIMA frameworks require calibration utterances to generate optimised enhancement parameters, which are then used for all subsequent utterances. Sub-optimal recognition performance occurs in noise conditions significantly different from those present during the calibration session - a serious problem in rapidly changing noise environments. We propose a dialog-based design which allows regular optimisation iterations in order to track changing noise conditions. Experiments using Mel-filterbank spectral subtraction are performed to determine the optimisation requirements for vehicular environments and show that minimal optimisation assists real-time operation with improved speech recognition accuracy. It is also shown that the proposed design provides improved recognition performance over frameworks incorporating a calibration session.
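
A minimal sketch of the enhancement algorithm being optimised (generic magnitude-domain spectral subtraction over Mel filterbank bands; the oversubtraction factor alpha stands in for the parameters a LIMA framework would tune from recognised state sequences):

    # Sketch: Mel-filterbank spectral subtraction with a spectral floor.
    import numpy as np

    def spectral_subtraction(frames_mag, noise_mag, alpha=2.0, beta=0.01):
        """frames_mag: (n_frames, n_bands) Mel-filterbank magnitudes;
        noise_mag: (n_bands,) noise estimate from non-speech frames."""
        clean = frames_mag - alpha * noise_mag      # subtract scaled noise floor
        floor = beta * frames_mag                   # spectral floor avoids negatives
        return np.maximum(clean, floor)

    rng = np.random.default_rng(0)
    noise = 0.5 + 0.1 * rng.random(24)              # stationary car-noise estimate
    speech = rng.random((100, 24)) + noise          # noisy Mel-band magnitudes
    enhanced = spectral_subtraction(speech, noise)
    print(enhanced.shape, bool(enhanced.min() >= 0.0))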

Relevance:

100.00%

Publisher:

Abstract:

A wide range of screening strategies have been employed to isolate antibodies and other proteins with specific attributes, including binding affinity, specificity, stability and improved expression. However, there remains no high-throughput system to screen for target-binding proteins in a mammalian, intracellular environment. Such a system would allow binding reagents to be isolated against intracellular clinical targets such as cell-signalling proteins associated with tumour formation (p53, ras, cyclin E), proteins associated with neurodegenerative disorders (huntingtin, beta-amyloid precursor protein), and various proteins crucial to viral replication (e.g. HIV-1 proteins such as Tat, Rev and Vif-1), which are difficult to screen by phage, ribosome or cell-surface display. This study used the β-lactamase protein complementation assay (PCA) as the display and selection component of a system for screening a protein library in the cytoplasm of HEK 293T cells. The Escherichia coli proteins colicin E7 (ColE7) and immunity protein 7 (Imm7) were used as model interaction partners for developing the system. These proteins drove effective β-lactamase complementation, resulting in a signal-to-noise ratio (9:1-13:1) comparable to that of other β-lactamase PCAs described in the literature. The model Imm7-ColE7 interaction was then used to validate protocols for library screening. Single positive cells harbouring the Imm7 and ColE7 binding partners were identified and isolated using flow cytometric cell sorting in combination with the fluorescent β-lactamase substrate CCF2/AM. A single-cell PCR was then used to amplify the Imm7 coding sequence directly from each sorted cell. With the screening system validated, it was used to screen a protein library based on the Imm7 scaffold against a proof-of-principle target. The wild-type Imm7 sequence, as well as mutants with wild-type residues in the ColE7-binding loop, were enriched from the library after a single round of selection, which is consistent with other eukaryotic screening systems such as yeast and mammalian cell-surface display. In summary, this thesis describes a new technology for screening protein libraries in a mammalian, intracellular environment. This system has the potential to complement existing screening technologies by allowing access to intracellular proteins and expanding the range of targets available to the pharmaceutical industry.