79 results for Contrast-to-noise ratio
at Indian Institute of Science - Bangalore - India
Abstract:
This paper describes work on the characterisation of an ultrasonic transducer fabricated in the laboratory. The response of the medium to the ultrasonic wave was obtained by converting the time-domain signal to the frequency domain using the FFT algorithm. A cross-correlation technique was adopted to increase the S/N ratio of the raw time-domain signal and, subsequently, to determine the ultrasonic velocity in the medium.
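As an illustrative sketch (not the authors' code), the cross-correlation and FFT processing described above can be reproduced with NumPy; the sampling rate, burst shape, noise level, and 0.1 m path length are assumptions for the toy example:

```python
import numpy as np

def ultrasonic_velocity(tx, rx, fs, path_length):
    # Cross-correlation raises the S/N of the raw trace; the peak lag is the transit delay.
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(corr) - (len(tx) - 1)    # delay in samples
    return path_length / (lag / fs)          # metres per second

# Synthetic check: a windowed tone burst delayed by 100 samples in noise.
fs = 1e6                                     # 1 MHz sampling rate (assumed)
burst = np.sin(2 * np.pi * 1e5 * np.arange(64) / fs) * np.hanning(64)
tx = np.zeros(1024); tx[:64] = burst
rx = np.zeros(1024); rx[100:164] = burst
rx += 0.1 * np.random.default_rng(0).standard_normal(1024)
v = ultrasonic_velocity(tx, rx, fs, path_length=0.1)
spectrum = np.abs(np.fft.rfft(rx))           # FFT view of the medium's response
```

With a 100-sample delay at 1 MHz over 0.1 m, the recovered velocity is close to 1000 m/s.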
Abstract:
Four-dimensional fluorescence microscopy, which records 3D image information as a function of time, provides an unbiased way of tracking the dynamic behavior of subcellular components in living samples and of capturing key events in complex macromolecular processes. Unfortunately, the combination of phototoxicity and photobleaching can severely limit the density or duration of sampling, thereby limiting the biological information that can be obtained. Although widefield microscopy provides a very light-efficient way of imaging, obtaining high-quality reconstructions requires deconvolution to remove optical aberrations. Most deconvolution methods, however, perform very poorly at low signal-to-noise ratios, and therefore require moderate photon doses to obtain acceptable resolution. We present a unique deconvolution method that combines an entropy-based regularization function with kernels that exploit general spatial characteristics of the fluorescence image to push the required dose to extremely low levels, resulting in an enabling technology for high-resolution in vivo biological imaging.
Abstract:
Resistivity imaging of a reconfigurable phantom with circular inhomogeneities is studied with a simple instrumentation and data acquisition system for Electrical Impedance Tomography. The reconfigurable phantom is developed with stainless steel electrodes, and a sinusoidal current of constant amplitude is injected into the phantom boundary using an opposite current injection protocol. Nylon and polypropylene cylinders with different cross-sectional areas are kept inside the phantom, and the boundary potential data are collected. The instrumentation and data acquisition system, with a DIP switch-based multiplexer board, are used to inject a constant current of the desired amplitude and frequency. Voltage data for the first eight current patterns (128 voltage data) are found to be sufficient to reconstruct the inhomogeneities, so the acquisition time is reduced. Resistivity images are reconstructed from the boundary data for different inhomogeneity positions using EIDORS-2D. The results show that the shape and resistivity of the inhomogeneity, as well as the background resistivity, are successfully reconstructed from the potential data for single- or double-inhomogeneity phantoms. The resistivity images obtained from the single- and double-inhomogeneity phantoms clearly indicate the inhomogeneity as the highly resistive material. Contrast-to-noise ratio (CNR) and contrast recovery (CR) of the reconstructed images are found to be high for inhomogeneities near all the electrodes arbitrarily chosen for the study. (C) 2010 Elsevier Ltd. All rights reserved.
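One common definition of contrast-to-noise ratio (difference of region means over the background standard deviation) can be computed directly from a reconstructed resistivity image. The toy image and the specific formula below are illustrative assumptions, not EIDORS output:

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """CNR, assumed here as (mean ROI - mean background) / background std."""
    roi, bg = image[roi_mask], image[bg_mask]
    return (roi.mean() - bg.mean()) / bg.std()

# Toy reconstruction: a high-resistivity disc on a noisy uniform background.
img = np.full((64, 64), 10.0)                      # background resistivity
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 16) ** 2 < 36        # inhomogeneity near an edge
img[disc] = 50.0
img += np.random.default_rng(1).normal(0.0, 1.0, img.shape)  # reconstruction noise
value = cnr(img, disc, ~disc)
```

With a 40-unit resistivity step over unit-variance noise, the CNR lands near 40.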
Abstract:
Practical phantoms are essential to assess electrical impedance tomography (EIT) systems for validation, calibration and comparison purposes. Metal surface electrodes are generally used in practical phantoms, and their design and fabrication errors reduce the SNR of the boundary data. Novel flexible and biocompatible gold electrode arrays of high geometric precision are proposed to improve the boundary data quality in EIT. The flexible gold electrode arrays are developed on flexible FR4 sheets using thin-film technology, and practical gold electrode phantoms are developed with different configurations. Injecting a constant current into the phantom boundary, the surface potentials are measured by a LabVIEW-based data acquisition system, and the resistivity images are reconstructed in EIDORS. The boundary data profiles and resistivity images obtained from the gold electrode phantoms are compared with identical phantoms developed with stainless steel electrodes. Surface profilometry, microscopy and impedance spectroscopy show that the gold electrode arrays are smooth, geometrically precise and less resistive. Results show that the boundary data accuracy and image quality improve with the gold electrode arrays: the diametric resistivity plot (DRP), contrast-to-noise ratio (CNR), percentage of contrast recovery (PCR) and coefficient of contrast (COC) of the reconstructed images are all improved in the gold electrode phantoms. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
A novel Projection Error Propagation-based Regularization (PEPR) method is proposed to improve image quality in Electrical Impedance Tomography (EIT). The PEPR method defines the regularization parameter as a function of the projection error, i.e. the difference between the experimental measurements and the calculated data. The regularization parameter in the reconstruction algorithm is thereby modified automatically according to the noise level in the measured data and the ill-posedness of the Hessian matrix. Resistivity imaging of practical phantoms is carried out with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm as well as with the Electrical Impedance Diffuse Optical Reconstruction Software (EIDORS), both using PEPR. The effect of the PEPR method is also studied with phantoms of different configurations and with different current injection methods. All resistivity images reconstructed with the PEPR method are compared with the single-step regularization (STR) and Modified Levenberg Regularization (LMR) techniques. The results show that the PEPR technique reduces the projection error and solution error at each iteration, for both simulated and experimental data in both algorithms, and improves the reconstructed images in terms of contrast-to-noise ratio (CNR), percentage of contrast recovery (PCR), coefficient of contrast (COC) and diametric resistivity profile (DRP). (C) 2013 Elsevier Ltd. All rights reserved.
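A minimal sketch of the PEPR idea on a toy linear problem: the regularization parameter is tied to the normalized projection error, so it relaxes automatically as the fit improves. The linear scaling and all problem sizes below are illustrative assumptions, not the paper's exact functional form:

```python
import numpy as np

def gn_step_pepr(J, v_meas, v_calc, rho, lam0=1e-2):
    """One Gauss-Newton update with an error-adaptive regularization parameter."""
    r = v_meas - v_calc                               # projection error
    lam = lam0 * np.linalg.norm(r) / np.linalg.norm(v_meas)  # assumed scaling
    H = J.T @ J + lam * np.eye(J.shape[1])            # regularized Hessian
    return rho + np.linalg.solve(H, J.T @ r)

# Toy linear "forward model": recover rho_true from noisy boundary data y = J rho.
rng = np.random.default_rng(2)
J = rng.standard_normal((40, 8))
rho_true = rng.standard_normal(8)
y = J @ rho_true + 0.01 * rng.standard_normal(40)
rho = np.zeros(8)
for _ in range(5):
    rho = gn_step_pepr(J, y, J @ rho, rho)
```

As the residual shrinks over the iterations, lam shrinks with it, so the final estimate approaches the unregularized least-squares solution.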
Abstract:
A defect-selective photothermal imaging system for the diagnostics of optical coatings is demonstrated. The instrument has been optimized for pump and probe parameters, detector performance, and signal processing algorithm. The imager is capable of efficiently mapping purely optical or thermal defects in coatings of low damage threshold and low absorbance. Detailed mapping of minor inhomogeneities at low pump power has been achieved through the simultaneous action of a low-noise fiber-optic photothermal beam deflection sensor and a common-mode-rejection demodulation (CMRD) technique. The linearity and sensitivity of the sensor have been examined theoretically and experimentally, and the signal-to-noise ratio improvement factor is found to be about 110 compared to a conventional bicell photodiode. The scanner is designed so that mapping of static or shock-sensitive samples is possible. In the case of a sample with an absolute absorptance of 3.8 x 10(-4), a change in absorptance of about 0.005 x 10(-4) has been detected without ambiguity, ensuring a contrast parameter of 760. This is about a 1085% improvement over the conventional approach employing a bicell photodiode at the same pump power. The merits of the system have been demonstrated by mapping two intentionally created damage sites in a MgF2 coating on fused silica at different excitation powers. Amplitude and phase maps were recorded for thermally thin and thick cases, and the results are compared to demonstrate a case which, in conventional imaging, would lead to a deceptive conclusion regarding the type and location of the damage. A residual damage profile created by long-term irradiation with high pump power density has also been depicted.
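The quoted contrast parameter of 760 appears to be simply the ratio of the sample's absolute absorptance to the smallest unambiguously detected change, which the figures in the abstract are consistent with (this interpretation is an assumption):

```python
# Quick arithmetic check of the quoted contrast parameter.
absolute_absorptance = 3.8e-4    # sample's absolute absorptance
detectable_change = 0.005e-4     # smallest unambiguously detected change
contrast = absolute_absorptance / detectable_change
```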
Abstract:
Based on a simple picture of speckle phenomena in optical interferometry, it is shown that the recent signal-to-noise ratio estimate for the so-called bispectrum, due to Wirnitzer (1985), does not possess the correct limit when photon statistics are unimportant. In this wave limit, which holds for bright sources, his calculations over-estimate the signal-to-noise ratio of the bispectrum by a factor of the order of the square root of the number of speckles.
Abstract:
We address a new problem: improving automatic speech recognition performance given multiple utterances of patterns from the same class. We formulate the problem of jointly decoding K multiple patterns given a single Hidden Markov Model. It is shown that such a solution is possible by aligning the K patterns using the proposed Multi Pattern Dynamic Time Warping algorithm, followed by the Constrained Multi Pattern Viterbi Algorithm. The new formulation is tested in the context of speaker-independent isolated word recognition for both clean and noisy patterns. When 10 percent of the speech is affected by burst noise at -5 dB Signal-to-Noise Ratio (local), joint decoding using only two noisy patterns reduces the noisy-speech recognition error rate by about 51 percent compared to single-pattern decoding using the Viterbi Algorithm. In contrast, a simple maximization of individual pattern likelihoods provides only about a 7 percent reduction in error rate.
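The multi-pattern alignment builds on classical dynamic time warping; the sketch below shows the standard two-sequence case (the K-pattern variant extends the same recursion to a K-dimensional grid). Feature vectors are simplified to scalars here, and the sequences are made-up examples:

```python
import numpy as np

def dtw(a, b):
    """Classical DTW distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Local path constraints: diagonal match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = [1, 2, 3, 2, 1]
y = [1, 1, 2, 3, 2, 1]     # same pattern, slightly time-warped
d_same = dtw(x, y)         # 0.0: aligns perfectly despite the warping
d_diff = dtw(x, [3, 3, 3, 3, 3])
```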
Abstract:
It has been shown that the conventional practice of designing a compensated hot-wire amplifier with a fixed ceiling-to-floor ratio results in a considerable and unnecessary increase in noise level at compensation settings other than the optimum (which occurs at maximum compensation at the highest frequency of interest). The optimum ceiling-to-floor ratio has been estimated to be between 1.5 and 2.0 ωmaxM. Applying these considerations to an amplifier in which the ceiling-to-floor ratio is optimized at each compensation setting (for a given amplifier bandwidth) demonstrates the usefulness of the method in improving the signal-to-noise ratio.
Abstract:
An expression for the spectrum and cross-spectrum of an acoustic field measured at two vertically separated sensors in shallow water has been obtained for any distribution of correlated noise sources over the surface. Numerical results are presented for the case where the noise sources, white noise and wind-induced colored noise, are contained within a circular disk centered over the sensors. The acoustic field is generally inhomogeneous except when the channel is deep. The coherence function becomes real for a large disk (radius greater than 25 times the channel depth), decreases with further increase in disk size, and finally tapers off beyond a certain limiting size, approximately given by 1/alpha, where alpha is the attenuation coefficient.
Abstract:
Combustion instability events in lean premixed combustion systems can cause spatio-temporal variations in the unburnt-mixture fuel/air ratio. When these variations interact with the flame, they provide a driving mechanism for heat-release oscillations. Several Reduced Order Modelling (ROM) approaches to predict the characteristics of these oscillations have been developed in the past. The present paper compares flame describing function characteristics determined from a ROM based on the level-set method with corresponding results from detailed, fully compressible reacting flow computations for the same two-dimensional slot flame configuration. The comparison is sensitive to small geometric differences in the shape of the nominally steady flame used in the two computations. When the results are corrected to account for these differences, describing function magnitudes are well predicted only below a lower cutoff frequency and above an upper one; in between, amplification of flame-surface wrinkling by the convective Darrieus-Landau (DL) instability degrades the magnitude predictions. In contrast, describing function phase predictions agree well, as the ROM correctly captures the transit time of wrinkles through the flame. Good agreement is also seen for both magnitude and phase of the flame response at large forcing amplitudes, at frequencies where the DL instability has minimal influence. Thus, the present ROM can predict the flame response as long as the DL instability, caused by gas expansion at the flame front, does not significantly alter flame-front perturbation amplitudes as they traverse the flame. (C) 2012 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
We address the problem of detecting cells in biological images. The problem is important in many automated image analysis applications. We identify the problem as one of clustering and formulate it within the framework of robust estimation using loss functions. We show how suitable loss functions may be chosen based on a priori knowledge of the noise distribution. Specifically, in the context of biological images, since the measurement noise is not Gaussian, quadratic loss functions yield suboptimal results. We show that by incorporating the Huber loss function, cells can be detected robustly and accurately. To initialize the algorithm, we also propose a seed selection approach. Simulation results show that Huber loss exhibits better performance compared with some standard loss functions. We also provide experimental results on confocal images of yeast cells. The proposed technique exhibits good detection performance even when the signal-to-noise ratio is low.
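The robust-estimation idea can be sketched as iteratively reweighted least squares with Huber weights; the single-cluster toy below is an assumption (the paper's formulation handles many cells and includes a seed-selection step):

```python
import numpy as np

def huber_weights(r, delta):
    """Huber influence: quadratic inside |r| <= delta, linear outside."""
    w = np.ones_like(r)
    big = np.abs(r) > delta
    w[big] = delta / np.abs(r[big])
    return w

def robust_center(points, delta=1.0, iters=50):
    """Estimate a cluster center by IRLS under the Huber loss."""
    c = points.mean(axis=0)
    for _ in range(iters):
        r = np.linalg.norm(points - c, axis=1)
        w = huber_weights(r, delta)
        c = (w[:, None] * points).sum(axis=0) / w.sum()
    return c

# Inliers around (0, 0) plus gross outliers: the plain mean is pulled away,
# while the Huber center stays near the true cluster location.
rng = np.random.default_rng(3)
pts = rng.normal(0.0, 0.3, size=(80, 2))
pts = np.vstack([pts, np.full((20, 2), 10.0)])   # 20% outliers at (10, 10)
center = robust_center(pts)
plain_mean = pts.mean(axis=0)
```

The outliers' weights fall off as 1/distance, so their pull on the center stays bounded, which is exactly the behavior a quadratic loss lacks.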
Abstract:
We address the problem of speech enhancement using a risk-estimation approach. In particular, we propose the use of Stein's unbiased risk estimator (SURE) for solving the problem. The need for a suitable finite-sample risk estimator arises because the actual risks invariably depend on the unknown ground truth. We consider the popular mean-squared error (MSE) criterion first, and then compare it against the perceptually motivated Itakura-Saito (IS) distortion by deriving unbiased estimators of the corresponding risks. We use a generalized SURE (GSURE) development, recently proposed by Eldar for MSE. We consider dependent observation models from the exponential family with an additive noise model, and derive an unbiased estimator for the risk corresponding to the IS distortion, which is non-quadratic. This serves to address the speech enhancement problem in a more general setting. Experimental results illustrate that the IS metric is efficient in suppressing musical noise, which affects MSE-enhanced speech. However, in terms of global signal-to-noise ratio (SNR), the minimum-MSE solution gives better results.
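For the MSE case under i.i.d. Gaussian noise, SURE for soft thresholding has a classical closed form (the SureShrink expression; the paper's IS-risk estimator is not reproduced here). A sketch with an assumed sparse test signal, showing threshold selection without access to the clean signal:

```python
import numpy as np

def sure_soft(y, t, sigma):
    """Unbiased estimate of the MSE of soft thresholding at threshold t."""
    n = y.size
    return (n * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= t)
            + np.sum(np.minimum(np.abs(y), t) ** 2))

# Sparse clean signal (assumed): 50 large coefficients among 950 zeros.
rng = np.random.default_rng(4)
sigma = 1.0
clean = np.concatenate([np.full(50, 5.0), np.zeros(950)])
y = clean + sigma * rng.standard_normal(clean.size)

# Pick the threshold minimizing SURE, then denoise with it.
ts = np.linspace(0.0, 4.0, 41)
sure_vals = [sure_soft(y, t, sigma) for t in ts]
t_star = ts[int(np.argmin(sure_vals))]
denoised = np.sign(y) * np.maximum(np.abs(y) - t_star, 0.0)
```

Because SURE is unbiased for the true risk, its minimizer tracks the oracle threshold without ever seeing `clean`.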
Abstract:
In this paper, a nonlinear suboptimal detector whose performance in heavy-tailed noise is significantly better than that of the matched filter is proposed. The detector consists of a nonlinear wavelet denoising filter to enhance the signal-to-noise ratio, followed by a replica correlator. Performance of the detector is investigated through an asymptotic theoretical analysis as well as Monte Carlo simulations. The proposed detector offers the following advantages over the optimal (in the Neyman-Pearson sense) detector: it is easier to implement, and it is more robust with respect to error in modeling the probability distribution of noise.
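A minimal stand-in for the detector's two stages: a one-level Haar soft-threshold denoiser followed by a replica correlator, run on Student-t (heavy-tailed) noise. The wavelet choice, threshold, and signal parameters are illustrative assumptions, not the paper's design:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoiser (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def replica_correlator(x, replica):
    """Correlate against the known replica; detect via the peak statistic."""
    return np.max(np.correlate(x, replica, mode="valid"))

# Heavy-tailed noise with and without an embedded replica.
rng = np.random.default_rng(5)
replica = np.sin(2 * np.pi * 0.05 * np.arange(64))
noise = rng.standard_t(df=3, size=1024)
x = noise.copy()
x[300:364] += 4.0 * replica
stat_signal = replica_correlator(haar_denoise(x, 1.0), replica)
stat_noise = replica_correlator(haar_denoise(noise, 1.0), replica)
```

Thresholding the detail coefficients shrinks noise energy before correlation, so the signal-present statistic stands well clear of the noise-only one.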