74 results for Denoising autoencoder


Relevance: 20.00%

Abstract:

This paper presents a new partial-differential-equation-based method for denoising images containing textures. The proposed model combines a nonlinear anisotropic diffusion filter with recent harmonic analysis techniques. Wave atom shrinkage, together with gradient-based edge detection, guides the diffusion process so that the image is smoothed while its essential characteristics are maintained. Two forcing terms preserve and enhance edges, boundaries, and the oscillatory features of images with irregular details and texture. Experimental results demonstrate the performance of our model for texture-preserving denoising in comparison with recent methods in the literature. © 2009 IEEE.
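The wave atom transform itself needs a dedicated library, but the diffusion half of such a model is easy to illustrate. Below is a minimal sketch of one explicit Perona-Malik anisotropic diffusion step, the classical filter this family of models builds on, not the paper's full scheme; the function name and parameter values are our own choices.

```python
import numpy as np

def perona_malik_step(u, dt=0.15, kappa=10.0):
    """One explicit step of Perona-Malik anisotropic diffusion.

    The diffusivity g(d) = 1 / (1 + (d/kappa)^2) slows smoothing
    across strong edges while flattening homogeneous regions.
    """
    # Forward differences with Neumann boundary (edge padding).
    p = np.pad(u, 1, mode='edge')
    dn = p[:-2, 1:-1] - u   # north
    ds = p[2:, 1:-1] - u    # south
    de = p[1:-1, 2:] - u    # east
    dw = p[1:-1, :-2] - u   # west
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

# Example: denoise a synthetic noisy step edge.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.1 * np.random.randn(64, 64)
u = noisy.copy()
for _ in range(20):
    u = perona_malik_step(u)
```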

Relevance: 20.00%

Abstract:

This thesis covers some of the main nonlinear diffusion filters for the image denoising problem. In particular, starting from the formulation of the problem as the minimization of a functional, it introduces state-of-the-art models such as the linear filter, the Perona-Malik filter, Total Variation, and curvature-based filters; finally, a new diffusion model developed by Prof. Antonio Marquina is applied to the denoising problem for the first time. These are followed by several finite-difference numerical schemes for solving them, yielding explicit, implicit, and AOS iterative procedures. A first-order and a higher-order conservative scheme are analyzed for the first time on the denoising problem, after a modification is proposed to make the diffusivity function suitable. Finally, an extensive chapter presents numerical and visual considerations on the experimental results obtained.
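For reference, the variational formulation such theses start from can be stated explicitly. This is the textbook form, in our notation, not a formula quoted from the thesis: the denoised image u minimizes an energy balancing fidelity to the noisy input f against a penalty on the gradient,

```latex
\min_u \; \frac{\lambda}{2}\int_\Omega (u - f)^2 \, dx
      \;+\; \int_\Omega \phi\!\left(|\nabla u|\right) dx ,
```

where the choice \(\phi(s) = s^2\) gives the linear (heat) filter, \(\phi(s) = s\) gives Total Variation, and suitable non-convex choices recover Perona-Malik-type diffusivities. Gradient descent on this energy yields the diffusion PDE

```latex
\partial_t u \;=\; \operatorname{div}\!\left(
    \frac{\phi'(|\nabla u|)}{|\nabla u|}\,\nabla u \right)
  \;-\; \lambda\,(u - f),
```

which is what the explicit, implicit, and AOS schemes mentioned above discretize.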

Relevance: 20.00%

Abstract:

The work in this thesis concerns the study and formulation of computational methods for removing the noise present in images, i.e. the process of denoising: reconstructing an image corrupted by noise given prior knowledge of the degradation phenomenon. The denoising problem is formulated as the minimization of a functional given by the sum of a data-fidelity term and the Total Variation. Denoising is an ill-conditioned problem, i.e. sensitive to small perturbations of the data; it is addressed here through techniques based on split Bregman and weighted Total Variation (TV). These techniques make it possible to optimize the visual quality of the images under examination.
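As a worked reference for the technique named above, the weighted-TV model and its split Bregman iteration take the standard Goldstein-Osher form (our notation, not quoted from the thesis):

```latex
\min_u \; \int_\Omega w\,|\nabla u|\,dx \;+\; \frac{\mu}{2}\,\|u - f\|_2^2 .
```

Introducing an auxiliary variable \(d \approx \nabla u\) and a Bregman variable \(b\) splits this into easy subproblems:

```latex
\begin{aligned}
u^{k+1} &= \arg\min_u \; \frac{\mu}{2}\|u - f\|_2^2
           + \frac{\lambda}{2}\|d^k - \nabla u - b^k\|_2^2
           \quad\text{(a linear system)},\\
d^{k+1} &= \operatorname{shrink}\!\left(\nabla u^{k+1} + b^k,\; w/\lambda\right),\\
b^{k+1} &= b^k + \nabla u^{k+1} - d^{k+1},
\end{aligned}
\qquad
\operatorname{shrink}(x,\gamma) = \frac{x}{|x|}\max(|x|-\gamma,\,0).
```

The shrinkage step is the only place the weight w enters, which is what makes the splitting attractive for weighted TV.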

Relevance: 20.00%

Abstract:

OBJECTIVE: In ictal scalp electroencephalogram (EEG), the presence of artefacts and the wide-ranging patterns of discharges are hurdles to good diagnostic accuracy. Quantitative EEG aids the lateralization and/or localization of epileptiform activity. METHODS: Twelve patients who achieved Engel Class I/IIa outcomes one year after temporal lobe surgery were selected, with approximately 1-3 ictal EEGs analyzed per patient. The EEG signals were denoised with the discrete wavelet transform (DWT), followed by computation of the normalized absolute slopes and spatial interpolation of the scalp topography with detection of local maxima. For localization, the region with the highest normalized absolute slopes at the time epileptiform activity was registered (>2.5 times the standard deviation) was designated the region of onset. For lateralization, the cerebral hemisphere registering the first appearance of normalized absolute slopes >2.5 times the standard deviation was designated the side of onset. For comparison, all EEG episodes were reviewed by two neurologists blinded to clinical information, who determined the localization and lateralization of seizure onset by visual analysis. RESULTS: 16/25 seizures (64%) were correctly localized by the visual method versus 21/25 (84%) by the quantitative EEG method; 12/25 seizures (48%) were correctly lateralized by the visual method versus 23/25 (92%) by the quantitative EEG method. The McNemar test gave p=0.15 for localization and p=0.0026 for lateralization when comparing the two methods. CONCLUSIONS: The quantitative EEG method correctly lateralized significantly more seizure episodes, with a trend toward more correctly localized seizures. SIGNIFICANCE: Coupling the DWT with the absolute slope method helps clinicians achieve better EEG diagnostic accuracy.
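A minimal sketch of the DWT denoising stage using the PyWavelets package with soft thresholding, plus one plausible reading of the "normalized absolute slope" criterion. The wavelet, decomposition level, and normalization are our assumptions; the abstract does not specify them.

```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet='db4', level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    x = np.asarray(signal, dtype=float)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Robust noise estimate (MAD) from the finest detail band.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(x.size))
    coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:x.size]

def abs_slope_onsets(x, k=2.5):
    """Flag samples whose absolute first difference exceeds the mean
    by k standard deviations (one reading of the paper's criterion)."""
    slope = np.abs(np.diff(x))
    return slope > slope.mean() + k * slope.std()
```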

Relevance: 20.00%

Abstract:

We propose a method that robustly combines color and feature buffers to denoise Monte Carlo renderings. On one hand, feature buffers such as per-pixel normals, textures, or depth are effective for determining denoising filters because features are highly correlated with the rendered image. Filters based solely on features, however, are prone to blurring image details that the features do not represent well. On the other hand, color buffers represent all details, but they may be less effective for determining filters because they are contaminated by the very noise that is to be removed. We propose to obtain filters from a combination of color and feature buffers in an NL-means and cross-bilateral filtering framework, determining a robust weighting of colors and features using a SURE-based error estimate. We show significant improvements in subjective and quantitative error compared to the previous state of the art, and we also demonstrate adaptive sampling and space-time filtering for animations.
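The robust SURE-based weighting is the paper's contribution and is not reproduced here, but the cross-bilateral building block it rests on is simple to sketch: range weights come from a clean feature buffer (e.g. depth) instead of the noisy color. The function below is an illustrative, unoptimized version with parameter values chosen arbitrarily.

```python
import numpy as np

def cross_bilateral(color, feat, radius=3, sigma_s=2.0, sigma_f=0.1):
    """Cross-bilateral filter: spatial Gaussian weights times range
    weights computed from a feature buffer (depth, normals, ...)
    rather than from the noisy color itself."""
    h, w = feat.shape
    r = radius
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pc = np.pad(color, ((r, r), (r, r), (0, 0)), mode='edge')
    pf = np.pad(feat, r, mode='edge')
    out = np.empty_like(color)
    for y in range(h):
        for x in range(w):
            fwin = pf[y:y + 2 * r + 1, x:x + 2 * r + 1]
            wgt = spatial * np.exp(-(fwin - feat[y, x]) ** 2
                                   / (2 * sigma_f ** 2))
            cwin = pc[y:y + 2 * r + 1, x:x + 2 * r + 1]
            out[y, x] = (wgt[..., None] * cwin).sum(axis=(0, 1)) / wgt.sum()
    return out
```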

Relevance: 20.00%

Abstract:

Image denoising methods have been implemented in both the spatial and transform domains. Each domain has its advantages and shortcomings, and the two can complement each other, which is why state-of-the-art methods such as block-matching 3D filtering (BM3D) combine both. Implementing such methods, however, is not trivial. We offer a hybrid method that is surprisingly easy to implement and yet rivals BM3D in quality.
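The abstract gives no algorithmic details, so the following is only a generic illustration of the spatial/transform split it alludes to, not the authors' method: a transform-domain estimate (hard-thresholded DCT coefficients) blended with a spatial-domain estimate (Gaussian smoothing). All names and parameters here are our own.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def hybrid_denoise(noisy, frac=0.05, blend=0.5):
    """Blend a transform-domain estimate with a spatial-domain one."""
    # Transform domain: zero out small DCT coefficients.
    c = dctn(noisy, norm='ortho')
    c[np.abs(c) < frac * np.abs(c).max()] = 0.0
    transform_est = idctn(c, norm='ortho')
    # Spatial domain: mild Gaussian smoothing.
    spatial_est = gaussian_filter(noisy, sigma=1.0)
    return blend * transform_est + (1.0 - blend) * spatial_est
```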

Relevance: 20.00%

Abstract:

Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process that progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited to synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
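The abstract does not detail the physical process, but the general idea of annealed robust averaging can be sketched: a Welsch-style robust mean over each neighborhood, with a temperature that shrinks every pass so that smoothing becomes progressively more conservative. This is our illustrative reading, not the paper's algorithm; all parameters are arbitrary.

```python
import numpy as np

def anneal_denoise(noisy, t0=0.5, cooling=0.8, steps=10):
    """Annealed robust smoothing: each pass replaces a pixel by a
    robustly weighted mean of its 3x3 neighborhood; the temperature
    T shrinks per pass, so early passes smooth aggressively and
    late passes preserve detail."""
    u, T = noisy.copy(), t0
    h, w = u.shape
    for _ in range(steps):
        p = np.pad(u, 1, mode='edge')
        # Stack the 3x3 neighborhood (center included): shape (9, h, w).
        nb = np.stack([p[i:i + h, j:j + w]
                       for i in range(3) for j in range(3)])
        wgt = np.exp(-((nb - u) ** 2) / (2 * T ** 2))  # robust weights
        u = (wgt * nb).sum(0) / wgt.sum(0)
        T *= cooling
    return u
```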

Relevance: 20.00%

Abstract:

With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.

Relevance: 20.00%

Abstract:

Advances in three-dimensional (3D) electron microscopy (EM) and image processing are providing considerable improvements in the resolution of subcellular volumes, macromolecular assemblies and individual proteins. However, the recovery of high-frequency information from biological samples is hindered by specimen sensitivity to beam damage. Low dose electron cryo-microscopy conditions afford reduced beam damage but typically yield images with reduced contrast and low signal-to-noise ratios (SNRs). Here, we describe the properties of a new discriminative bilateral (DBL) filter that is based upon the bilateral filter implementation of Jiang et al. (Jiang, W., Baker, M.L., Wu, Q., Bajaj, C., Chin, W., 2003. Applications of a bilateral denoising filter in biological electron microscopy. J. Struc. Biol. 128, 82-97.). In contrast to the latter, the DBL filter can distinguish between object edges and high-frequency noise pixels through the use of an additional photometric exclusion function. As a result, high frequency noise pixels are smoothed, yet object edge detail is preserved. In the present study, we show that the DBL filter effectively reduces noise in low SNR single particle data as well as cellular tomograms of stained plastic sections. The properties of the DBL filter are discussed in terms of its usefulness for single particle analysis and for pre-processing cellular tomograms ahead of image segmentation. (c) 2006 Elsevier Inc. All rights reserved.
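The exact photometric exclusion function of the DBL filter is not given in the abstract; the sketch below shows one simple way a bilateral filter can discriminate impulsive noise pixels from edges, by treating a pixel far from its local median as an outlier to be rebuilt spatially. It is illustrative only, and all names and parameters are our assumptions.

```python
import numpy as np

def discriminative_bilateral(img, radius=2, sigma_s=1.5,
                             sigma_r=0.1, noise_cut=0.3):
    """Bilateral variant with a crude exclusion test: if the centre
    pixel deviates strongly from its local median it is treated as
    noise and smoothed spatially; otherwise range weights preserve
    edges as in a standard bilateral filter."""
    h, w = img.shape
    r = radius
    p = np.pad(img, r, mode='edge')
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = p[y:y + 2 * r + 1, x:x + 2 * r + 1]
            if abs(img[y, x] - np.median(win)) > noise_cut:
                wgt = spatial  # likely noise pixel: ignore range term
            else:
                wgt = spatial * np.exp(-(win - img[y, x]) ** 2
                                       / (2 * sigma_r ** 2))
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```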

Relevance: 20.00%

Abstract:

Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improving the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for both signal-dependent noise (AAS, BM3Dc, HHM, TLS) and independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performance was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
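One standard bridge between the two filter families compared here is variance stabilization: the Anscombe transform turns approximately Poisson data into roughly unit-variance Gaussian data, so that AWGN denoisers can be applied to quantum noise. The sketch below uses a Gaussian filter as a stand-in denoiser; this is a textbook technique, not one of the algorithms evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    """Variance-stabilizing transform: Poisson -> approximately
    unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse; an exact unbiased inverse performs
    # better at low photon counts.
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_poisson(img, sigma=1.5):
    """Stabilize, apply any AWGN denoiser (Gaussian stand-in), invert."""
    return inverse_anscombe(gaussian_filter(anscombe(img), sigma))
```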

Relevance: 20.00%

Abstract:

2000 Mathematics Subject Classification: 68T01, 62H30, 32C09.

Relevance: 20.00%

Abstract:

This work presents an analysis of the convolutional operators that characterize graph neural networks. The treatment is divided into two parts, one theoretical and one experimental. The theoretical part first introduces the preliminary notions of meshes and convolution on meshes. It then covers the basic concepts of geometric deep learning, namely the definitions of the convolutional operators and of the pooling and unpooling operators. Particular attention is given to the Graph U-Net architecture. The experimental part concerns the application of neural networks and the analysis of convolutional operators applied to the denoising of surfaces perturbed by imperfect measurements from 3D scanners.
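A minimal sketch of the graph-convolution operator at the core of such architectures (Kipf-Welling normalized aggregation), given here in a toy mesh-denoising flavour. The example graph, features, and weights are our own, and this is far simpler than the Graph U-Net studied in the thesis.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step with Kipf-Welling normalization:
    H' = D^{-1/2} (A + I) D^{-1/2} H W. A real layer would apply a
    nonlinearity (e.g. ReLU) to the result."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return A_norm @ H @ W

# Toy mesh-denoising flavour: treat noisy vertex positions as the
# feature matrix and smooth them by neighborhood aggregation.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)                # a single triangle
V = np.array([[0.0, 0.1], [1.0, -0.05], [0.5, 0.9]])  # noisy 2D vertices
V_smooth = gcn_layer(A, V, np.eye(2))
```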