999 results for Denoising Techniques


Relevance:

100.00%

Publisher:

Abstract:

The aim of the thesis was to design and develop spatially adaptive denoising techniques, with edge and feature preservation, for images corrupted by additive white Gaussian noise and for SAR images affected by speckle noise. Image denoising is a well-researched topic and has found multifaceted applications in day-to-day life. Denoising based on multiresolution analysis using the wavelet transform has received considerable attention in recent years. The directionlet-based denoising schemes presented in this thesis are effective at preserving image-specific features such as edges and contours. The scope of this research remains open in areas such as further optimization for speed and extension of the techniques to related problems such as colour-image and video denoising; such studies would further augment the practical use of these techniques.
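The thesis's directionlet transform has no off-the-shelf implementation, but the multiresolution wavelet-shrinkage baseline that such schemes build on can be sketched in a few lines. The snippet below is a minimal illustration using PyWavelets with a VisuShrink-style universal threshold; the wavelet choice, decomposition depth, and noise level are illustrative assumptions, not values from the thesis.

```python
import numpy as np
import pywt

def wavelet_denoise(img, sigma, wavelet="db4", levels=3):
    """Soft-threshold the detail subbands of a multilevel 2-D wavelet
    decomposition (VisuShrink-style universal threshold)."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    thr = sigma * np.sqrt(2.0 * np.log(img.size))      # universal threshold
    out = [coeffs[0]]                                   # keep the approximation band
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, thr, mode="soft") for c in detail))
    return pywt.waverec2(out, wavelet)

# Toy example: denoise a smooth synthetic image corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = wavelet_denoise(noisy, sigma=0.1)
```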

Relevance:

70.00%

Publisher:

Abstract:

We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can therefore be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters are selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose two variants: the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), in which the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques. (C) 2015 SPIE and IS&T
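The paper's closed-form SURE expressions for the bilateral filter are not reproduced here. As a hedged illustration of the underlying idea, the sketch below instead selects bilateral filter parameters by minimizing a generic Monte Carlo SURE estimate (probe-based divergence) applied to scikit-image's denoise_bilateral; the parameter grid, test image, and noise level are assumptions for the example.

```python
import numpy as np
from skimage.restoration import denoise_bilateral

def mc_sure(y, denoiser, sigma, eps=1e-3, seed=0):
    """Monte Carlo SURE: an unbiased MSE estimate that needs no ground truth.
    The divergence of the denoiser is estimated with a random +/-1 probe."""
    rng = np.random.default_rng(seed)
    b = rng.choice([-1.0, 1.0], size=y.shape)
    fy = denoiser(y)
    div = np.sum(b * (denoiser(y + eps * b) - fy)) / eps
    n = y.size
    return np.sum((fy - y) ** 2) / n - sigma ** 2 + 2.0 * sigma ** 2 * div / n

# Synthetic noisy image with a known noise level (kept strictly positive,
# as expected by skimage's bilateral filter).
sigma = 0.05
rng = np.random.default_rng(1)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = np.clip(0.5 * clean + 0.25 + sigma * rng.standard_normal(clean.shape),
                0.01, 0.99)

# Grid-search (sigma_color, sigma_spatial) by minimising the SURE estimate.
grid = [(sc, ss) for sc in (0.05, 0.1, 0.2) for ss in (1.0, 2.0, 4.0)]
best = min(grid, key=lambda p: mc_sure(
    noisy,
    lambda x: denoise_bilateral(x, sigma_color=p[0], sigma_spatial=p[1]),
    sigma))
```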

Relevance:

70.00%

Publisher:

Abstract:

This paper proposes a denoising algorithm that performs non-local means bilateral filtering. As the existing literature suggests, non-local means (NLM) is one of the most widely used denoising techniques, but it has the critical drawback of smoothing edges. To address this, we perform fast and efficient NLM using approximate nearest-neighbour fields and improve the edge content of the denoised image by formulating a joint bilateral filter. Using the proposed joint bilateral filter, smooth regions are denoised with the NLM approach, while efficient edge reconstruction is obtained from the bilateral filter. Furthermore, to avoid tedious parameter selection, we carry out noise estimation before performing joint bilateral filtering. The proposed approach is observed to perform well on images with high noise levels.
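The paper's ANNF-accelerated NLM and its specific joint-bilateral formulation are not available in standard libraries. The sketch below shows one plausible arrangement of the same pipeline under stated assumptions: estimate the noise level, run scikit-image's fast NLM, then apply a small hand-written joint bilateral filter that averages the noisy image with range weights taken from the NLM output used as guide. The parameter values are illustrative only and not from the paper.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def joint_bilateral(img, guide, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Simple joint (cross) bilateral filter: spatial weights are Gaussian,
    range weights are computed on the guidance image."""
    H, W = img.shape
    pad_img = np.pad(img, radius, mode="reflect")
    pad_gui = np.pad(guide, radius, mode="reflect")
    out = np.empty_like(img)
    ax = np.arange(-radius, radius + 1)
    gx, gy = np.meshgrid(ax, ax)
    spatial = np.exp(-(gx ** 2 + gy ** 2) / (2.0 * sigma_s ** 2))
    for i in range(H):
        for j in range(W):
            win = pad_img[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gwin = pad_gui[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-((gwin - guide[i, j]) ** 2) / (2.0 * sigma_r ** 2))
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out

# Synthetic noisy test image.
rng = np.random.default_rng(2)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean + 0.08 * rng.standard_normal(clean.shape)

# Pipeline: estimate the noise level, run fast NLM, then joint-bilateral
# filter the noisy image using the NLM result as the (cleaner) guide.
sigma_est = float(estimate_sigma(noisy))
nlm = denoise_nl_means(noisy, h=1.15 * sigma_est, sigma=sigma_est,
                       patch_size=5, patch_distance=6, fast_mode=True)
result = joint_bilateral(noisy, guide=nlm, sigma_r=2.0 * sigma_est)
```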

Relevance:

60.00%

Publisher:

Abstract:

Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained by smoothing data with a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their landmark paper that local LS fitting of polynomials, evaluated at the midpoint of the approximation interval, is equivalent to filtering with a fixed impulse response. The problem we address here is how to choose a pointwise minimum mean-squared error (MMSE) S-G filter length or order for smoothing while preserving the temporal structure of a time-varying signal. We resolve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies quickly, and vice versa, which lets us trade off bias and variance suitably and results in near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter-length algorithm improves when a regularization term is incorporated in the SURE objective function. We evaluate the algorithms on real-world electrocardiogram (ECG) signals, and the results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable with, and in some cases better than, some standard denoising techniques available in the literature.
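The paper's pointwise, regularized SURE optimization of the filter length is more involved; as a reduced illustration, the sketch below uses SURE to pick a single global Savitzky-Golay window length with SciPy, exploiting the fact that for a fixed FIR smoother the divergence term is just the centre tap of the impulse response. The test signal, noise level, and candidate window lengths are assumptions for the example.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

def sure_savgol(y, sigma, window, order):
    """SURE for Savitzky-Golay smoothing. For a fixed FIR smoother the
    divergence is N times the centre tap of the impulse response, so the
    estimate reduces to a residual term plus a penalty on that tap."""
    fy = savgol_filter(y, window, order)
    centre_tap = savgol_coeffs(window, order)[window // 2]
    n = y.size
    return np.sum((fy - y) ** 2) / n - sigma ** 2 + 2.0 * sigma ** 2 * centre_tap

# Noisy chirp-like test signal with a known noise level.
sigma = 0.1
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2.0 * np.pi * (5.0 * t + 10.0 * t ** 2))
rng = np.random.default_rng(3)
y = clean + sigma * rng.standard_normal(t.size)

# Pick the window length minimising the SURE estimate (polynomial order 3).
candidates = range(5, 101, 2)
best_window = min(candidates, key=lambda w: sure_savgol(y, sigma, w, order=3))
smoothed = savgol_filter(y, best_window, 3)
```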

Relevance:

60.00%

Publisher:

Abstract:

Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained with confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve this directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to begin with, but we add a further degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low-NA images of F-actin filaments. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
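The full directional bilateral filter and its risk-based parameter optimization are beyond a short snippet, but the structure-tensor orientation estimate that steers the oriented domain kernel can be sketched directly. The code below is a minimal version using NumPy/SciPy; the smoothing scale and the synthetic test pattern are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_orientation(img, sigma=2.0):
    """Structure-tensor estimate of the dominant local orientation (radians,
    measured from the x-axis) and of the orientation coherence in [0, 1].
    An oriented smoothing kernel can be rotated to this angle at each pixel."""
    gy, gx = np.gradient(img)                 # row (y) and column (x) derivatives
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # Dominant gradient direction, then rotate by 90 degrees to get the
    # along-edge / along-filament direction.
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + np.pi / 2.0
    trace = Jxx + Jyy
    coherence = np.where(trace > 1e-12,
                         np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2) / (trace + 1e-12),
                         0.0)
    return theta, coherence

# Synthetic filament-like pattern oriented at roughly 135 degrees.
yy, xx = np.mgrid[0:128, 0:128]
stripes = np.sin(0.3 * (xx + yy))
theta, coherence = local_orientation(stripes)
```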

Relevance:

60.00%

Publisher:

Abstract:

Datuming in complex areas is a long-standing problem in seismic processing and imaging that has not been well solved. In theory, wave-equation datuming (WED) works well in areas with substantial surface topography and complex velocity structure; in practice, however, many difficulties remain, for three main reasons: (1) it is difficult to obtain an accurate velocity model; (2) the computational cost is high and the efficiency is low; and (3) the low S/N ratio of the seismic data introduces reflection waveform distortions. The second and third problems are addressed in this paper. To improve computational efficiency, the DP1 operator proposed by Fu Li-Yun is applied in WED. Quantitative and semi-quantitative conclusions about computational accuracy and efficiency are obtained by comparing how three operators (PS, SSF, DP1) adapt to surface topography and lateral velocity variation. Moreover, the impact of near-surface scattering associated with complex surface topography on WED is analyzed theoretically. The analysis leads to the following conclusions: WED is stable and effective when the field data have a high S/N ratio and the velocity model is accurate, but it does not work well when the S/N ratio of the field data is low, so denoising within the WED workflow is important for low-S/N data. The paper presents a theoretical analysis of the issues facing WED, which is expected to provide a useful reference for the further development of this technology.
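The SSF and DP1 operators are too involved for a short example, but the simplest of the three operators compared, constant-velocity phase-shift (PS) extrapolation, can be sketched in a few lines. The snippet below is a toy illustration under an assumed constant velocity and an idealized flat event; it is not the paper's implementation, and sign conventions vary between codes.

```python
import numpy as np

def phase_shift_datuming(d, dt, dx, v, dz):
    """Constant-velocity phase-shift (PS) extrapolation: continue the recorded
    section d(t, x) through a layer of thickness dz and velocity v by applying
    exp(i * kz * dz) in the (omega, kx) domain; evanescent energy is discarded."""
    nt, nx = d.shape
    D = np.fft.fft2(d)
    w = 2.0 * np.pi * np.fft.fftfreq(nt, dt)          # angular frequency axis
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)         # horizontal wavenumber axis
    W, KX = np.meshgrid(w, kx, indexing="ij")
    kz2 = (W / v) ** 2 - KX ** 2
    propagating = kz2 > 0.0
    kz = np.sqrt(np.where(propagating, kz2, 0.0))
    shift = np.where(propagating, np.exp(1j * np.sign(W) * kz * dz), 0.0)
    return np.real(np.fft.ifft2(D * shift))

# Toy example: an impulsive flat event at 0.4 s is continued through 200 m of
# 2000 m/s material; with this sign convention it arrives 0.1 s earlier.
nt, nx, dt, dx = 512, 128, 0.004, 10.0
d = np.zeros((nt, nx))
d[100, :] = 1.0
redatumed = phase_shift_datuming(d, dt, dx, v=2000.0, dz=200.0)
```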

Relevance:

40.00%

Publisher:

Abstract:

This work describes the use of a quadratic programming optimization procedure for designing asymmetric apodization windows to denoise THz transient interferograms and compares the results with those obtained when wavelet signal-processing algorithms are adopted. A system identification technique in the wavelet domain is also proposed for estimating the complex insertion loss function. The proposed techniques can enhance the frequency-dependent dynamic range of an experiment and should be of particular interest to the THz imaging and tomography community. Future advances in THz sources and detectors are likely to increase the signal-to-noise ratio of the recorded THz transients; high-quality apodization techniques will then become more important and may set the limit on the achievable accuracy of the deduced spectrum.
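The quadratic-programming window design itself is not reproduced here. The sketch below only illustrates applying an asymmetric apodization window, built from two Tukey half-windows with different taper fractions, to a synthetic THz-like transient before computing its spectrum; the pulse model, sampling interval, and taper fractions are assumptions for the example.

```python
import numpy as np
from scipy.signal.windows import tukey

def asymmetric_window(n, peak_idx, alpha_left=0.8, alpha_right=0.3):
    """Asymmetric apodization: join the rising half of one Tukey window with
    the falling half of another, meeting at the pulse peak."""
    left = tukey(2 * peak_idx, alpha_left)[:peak_idx]
    right = tukey(2 * (n - peak_idx), alpha_right)[n - peak_idx:]
    return np.concatenate([left, right])

# Toy THz-like transient: a 1 THz pulse under a 0.5 ps Gaussian envelope plus noise.
n, dt = 4096, 0.05e-12                         # 0.05 ps sampling interval
t = np.arange(n) * dt
pulse = np.exp(-((t - 20e-12) / 0.5e-12) ** 2) * np.sin(2.0 * np.pi * 1e12 * t)
rng = np.random.default_rng(4)
trace = pulse + 0.02 * rng.standard_normal(n)

peak = int(np.argmax(np.abs(trace)))
apodized = trace * asymmetric_window(n, peak)
spectrum = np.fft.rfft(apodized)               # spectrum of the apodized transient
freqs = np.fft.rfftfreq(n, dt)
```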

Relevance:

30.00%

Publisher:

Abstract:

In big-data image and video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary; merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. We refer to this as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider image denoising and present a comparative analysis of our algorithm with standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting the denoising performance.
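A compact sketch of the split-and-merge idea, using scikit-learn's mini-batch dictionary learner as a stand-in for the paper's training algorithm: cluster the training vectors, learn a local dictionary per cluster, then learn the global dictionary on the pooled local atoms. Cluster counts, atom counts, and the random training data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def split_and_merge_dictionary(X, n_clusters=4, local_atoms=32, global_atoms=64):
    """Split: cluster the training vectors and learn a small dictionary per
    cluster. Merge: learn a global dictionary on the pooled local atoms."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    local_atom_list = []
    for c in range(n_clusters):
        learner = MiniBatchDictionaryLearning(n_components=local_atoms, random_state=0)
        learner.fit(X[labels == c])
        local_atom_list.append(learner.components_)      # (local_atoms, n_features)
    pooled = np.vstack(local_atom_list)
    merger = MiniBatchDictionaryLearning(n_components=global_atoms, random_state=0)
    merger.fit(pooled)                                    # second-stage learning on atoms
    return merger.components_                             # (global_atoms, n_features)

# Example with random 8x8 "patches" standing in for a large training set.
rng = np.random.default_rng(5)
X = rng.standard_normal((2000, 64))
D = split_and_merge_dictionary(X)
```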

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a new partial differential equation (PDE) based method is presented for denoising images containing texture. The proposed model combines a nonlinear anisotropic diffusion filter with recent harmonic-analysis techniques: wave-atom shrinkage, combined with gradient-based edge detection, is used to guide the diffusion process so that the image is smoothed while its essential characteristics are maintained. Two forcing terms are used to maintain and improve edges, boundaries, and oscillatory features of images with irregular details and texture. Experimental results show the performance of our model for texture-preserving denoising compared with recent methods in the literature. © 2009 IEEE.
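Wave-atom shrinkage has no widely available reference implementation, so the sketch below shows only the nonlinear anisotropic diffusion backbone (classical Perona-Malik) that such a model builds on, without the shrinkage-derived and gradient-derived forcing terms. Iteration count, edge-stopping parameter, and step size are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.05, dt=0.2):
    """Classical Perona-Malik nonlinear diffusion: the conductivity decays with
    the local gradient magnitude, so smoothing slows down at strong edges.
    Borders are treated periodically via np.roll, which is adequate for a sketch."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)       # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u           # differences toward the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Toy example: smooth a noisy synthetic image while keeping its structure.
rng = np.random.default_rng(6)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = perona_malik(noisy)
```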

Relevance:

30.00%

Publisher:

Abstract:

With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
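As a minimal illustration of the error-estimation-driven adaptive sampling that such methods rely on (not any specific algorithm from the course), the sketch below allocates an extra per-pixel sample budget in proportion to each pixel's estimated standard error of the mean. The toy "renderer" and the budget numbers are assumptions for the example.

```python
import numpy as np

def adaptive_sample_budget(sample_var, n_initial, extra_budget):
    """Distribute an extra sample budget across pixels in proportion to each
    pixel's estimated standard error of the mean after n_initial samples."""
    std_err = np.sqrt(sample_var / n_initial)
    weights = std_err / (np.sum(std_err) + 1e-12)
    return np.floor(extra_budget * weights).astype(int)

# Toy stand-in for a renderer: pixels in the right half are noisier, so they
# should receive most of the additional samples.
rng = np.random.default_rng(7)
H, W, n_initial = 64, 64, 16
per_column_sigma = np.where(np.arange(W) < W // 2, 0.1, 1.0)
samples = rng.normal(0.5, per_column_sigma, size=(n_initial, H, W))
var = samples.var(axis=0, ddof=1)
extra = adaptive_sample_budget(var, n_initial, extra_budget=50_000)
```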