973 results for Denoising, noise, rumore, Total Variation, Total Variation Weighted
Abstract:
The work carried out in this thesis concerns the study and formulation of computational methods for removing the noise present in images, i.e. the process of "denoising", which consists of reconstructing an image corrupted by noise given a priori knowledge of the degradation phenomenon. The denoising problem is formulated as the minimization of a functional given by the sum of a data-fidelity term and the Total Variation. Image denoising is addressed through techniques based on split Bregman and weighted Total Variation (TV); the underlying restoration problem is ill-conditioned, i.e. sensitive to small perturbations of the data. These techniques make it possible to improve the visual quality of the images under examination.
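A minimal statement of the kind of functional described, under assumed standard notation (f the noisy image, u the reconstruction, w >= 0 a spatially varying weight, lambda > 0 the fidelity weight):

\[
\min_{u}\ \frac{\lambda}{2}\,\|u - f\|_{2}^{2} \;+\; \int_{\Omega} w(x)\,|\nabla u(x)|\,dx .
\]

Split Bregman handles the non-smooth TV term by introducing an auxiliary variable d \approx \nabla u and alternating between a quadratic subproblem in u and a componentwise shrinkage (soft-thresholding) step in d.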
Abstract:
Image inpainting refers to restoring a damaged image with missing information. The total variation (TV) inpainting model is one such method that simultaneously fills in the regions with available information from their surroundings and eliminates noise. The method works well with small, narrow inpainting domains. However, there remains an urgent need to develop fast iterative solvers, as the underlying problem sizes are large. In addition, one needs to tackle the imbalance of results between inpainting and denoising. When the inpainting regions are thick and large, the inpainting procedure is quite slow, usually requires a significant number of iterations, and inevitably leads to oversmoothing outside the inpainting domain. To overcome these difficulties, we propose a solution for the TV inpainting model based on a nonlinear multigrid algorithm.
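A hedged sketch of the TV inpainting functional alluded to, with assumed notation (Omega the image domain, D the inpainting region with missing data, f the observed image, lambda > 0):

\[
\min_{u}\ \int_{\Omega} |\nabla u|\,dx \;+\; \frac{\lambda}{2} \int_{\Omega \setminus D} (u - f)^{2}\,dx ,
\]

so the fidelity term acts only where data are available, while the TV term both fills in D from its surroundings and removes noise outside it.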
Abstract:
The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach that has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process as a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
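A minimal numerical sketch of the discrete energy-minimization viewpoint described, using a smoothed anisotropic TV on a cell-wise (piecewise-constant) image and plain gradient descent; the function names, the smoothing parameter eps, and the solver choice are illustrative assumptions, not the paper's DG scheme:

```python
import numpy as np

def tv_energy(u, f, lam, eps=1e-3):
    """Discrete energy: 0.5*||u - f||^2 + lam * (smoothed) anisotropic TV.

    u, f are 2-D arrays of cell values, mimicking a piecewise-constant (P0)
    representation; eps smooths |.| so the energy is differentiable.
    """
    dx = np.diff(u, axis=1)              # jumps across vertical interfaces
    dy = np.diff(u, axis=0)              # jumps across horizontal interfaces
    tv = np.sum(np.sqrt(dx**2 + eps)) + np.sum(np.sqrt(dy**2 + eps))
    return 0.5 * np.sum((u - f) ** 2) + lam * tv

def denoise(f, lam=0.1, steps=500, tau=0.1, eps=1e-3):
    """Minimize tv_energy by plain gradient descent (illustrative only).

    The fixed step tau must be small relative to 2 / (1 + 4*lam/sqrt(eps))
    for the iteration to be stable.
    """
    u = f.astype(float).copy()
    for _ in range(steps):
        dx = np.diff(u, axis=1)
        dy = np.diff(u, axis=0)
        wx = dx / np.sqrt(dx**2 + eps)   # derivative of smoothed |dx|
        wy = dy / np.sqrt(dy**2 + eps)
        grad = u - f                     # gradient of the fidelity term
        grad[:, 1:] += lam * wx          # chain rule through the differences
        grad[:, :-1] -= lam * wx
        grad[1:, :] += lam * wy
        grad[:-1, :] -= lam * wy
        u -= tau * grad
    return u
```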
Abstract:
We obtain upper bounds for the total variation distance between the distributions of two Gibbs point processes in a very general setting. Applications are provided to various well-known processes and settings from spatial statistics and statistical physics, including the comparison of two Lennard-Jones processes, hard core approximation of an area interaction process and the approximation of lattice processes by a continuous Gibbs process. Our proof of the main results is based on Stein's method. We construct an explicit coupling between two spatial birth-death processes to obtain Stein factors, and employ the Georgii-Nguyen-Zessin equation for the total bound.
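For reference, the total variation distance between two distributions P and Q (here, the laws of two point processes on a space of configurations) is

\[
d_{TV}(P, Q) \;=\; \sup_{A} \bigl|P(A) - Q(A)\bigr| ,
\]

with the supremum taken over measurable events A; the bounds of the paper control this quantity via Stein factors obtained from the explicit coupling of spatial birth-death processes.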
Abstract:
In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization in which, at each step, either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, in which the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices result in dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. This results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the state of the art.
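A hedged sketch of the kind of joint energy underlying total-variation blind deconvolution, with assumed notation (f the blurry noisy input, u the sharp image, k the blur kernel, * convolution):

\[
E(u, k) \;=\; \frac{1}{2}\,\|k * u - f\|_{2}^{2} \;+\; \lambda_{1}\,TV(u) \;+\; \lambda_{2}\,TV(k),
\qquad k \ge 0,\ \ \textstyle\sum k = 1 ,
\]

where the requirement that k be nonnegative and sum to one is the normalization whose delayed application the analysis identifies as critical, and the no-blur solution is the pair (f, Dirac delta).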
Abstract:
Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort toward understanding the basic mechanisms that solve blind deconvolution. While this effort has resulted in the deployment of effective algorithms, the theoretical findings have generated contrasting views on why these approaches work. On the one hand, one can observe experimentally that alternating energy-minimization algorithms converge to the desired solution; on the other hand, it has been shown that such alternating minimization algorithms should fail to converge and that one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is what actually makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the top performing algorithms.
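The alternating scheme under discussion can be written in a hedged generic form (f the blurry input, u the sharp image, k the blur kernel, t the iteration index):

\[
u^{t+1} \in \arg\min_{u}\ \tfrac{1}{2}\|k^{t} * u - f\|_{2}^{2} + \lambda\,TV(u),
\qquad
k^{t+1} \in \arg\min_{k \ge 0,\ \sum k = 1}\ \tfrac{1}{2}\|k * u^{t+1} - f\|_{2}^{2} ,
\]

i.e. a sharp-image update under the TV prior followed by a constrained kernel update; the theoretical debate summarized above concerns whether iterating these two steps can avoid the trivial no-blur solution.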
Abstract:
We present a gravity data inversion method for reconstructing the discontinuous basement relief of sedimentary basins in which the density contrast between the sedimentary pack and the basement is known a priori and may be either constant or decrease monotonically with depth. The solution is stabilized with the total variation (TV) functional, which does not penalize abrupt variations in the solution. We compare the proposed method with the global smoothness (GS), weighted smoothness (WS) and entropic regularization (ER) methods using synthetic data produced by 2D and 3D basins with discontinuous basement relief. The solutions obtained with the proposed method were better than those obtained with GS and similar to those produced by WS and ER. On the other hand, unlike WS, the proposed method does not require a priori knowledge of the maximum basement depth. Compared with ER, the TV method is operationally simpler and requires the specification of only one regularization parameter. The TV, GS and WS methods were also applied to the following areas: Ponte do Poema (UFPA), Steptoe Valley (Nevada, USA), San Jacinto Graben (California, USA) and Büyük Menderes (Turkey). Most of these areas are characterized by the presence of high-angle faults. In all cases, TV produced basement topography estimates exhibiting sharp, high-angle discontinuities, in agreement with the tectonic setting of the areas in question.
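A hedged sketch of the stabilized inverse problem described, assuming p is the vector of basement depths beneath the interpretation cells, g(p) the nonlinear gravimetric forward model, d^{obs} the observed anomaly, and mu > 0 the single regularization parameter:

\[
\min_{p}\ \|d^{\mathrm{obs}} - g(p)\|_{2}^{2} \;+\; \mu \sum_{i} |p_{i+1} - p_{i}| ,
\]

where the total variation stabilizer penalizes the sum of absolute depth differences between neighbouring cells and therefore does not discourage the abrupt, high-angle jumps expected at faults.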
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods, and comparisons between these methods performed on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
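As a concrete instance of this family of methods, here is a sketch of the total-variation special case for a 1-D piecewise-constant signal; the function name, the smoothing of |.|, and the use of scipy's L-BFGS-B solver are illustrative assumptions rather than any of the specific solvers surveyed:

```python
import numpy as np
from scipy.optimize import minimize

def tv_denoise_1d(y, lam=2.0, eps=1e-8):
    """Denoise a piecewise-constant signal y by minimizing
    0.5*||x - y||^2 + lam * sum |x[i+1] - x[i]|   (|.| smoothed by eps)."""
    def objective(x):
        d = np.diff(x)
        s = np.sqrt(d**2 + eps)            # smoothed absolute jumps
        f = 0.5 * np.sum((x - y) ** 2) + lam * np.sum(s)
        g = x - y                          # gradient of the fidelity term
        w = d / s
        g[1:] += lam * w                   # chain rule through np.diff
        g[:-1] -= lam * w
        return f, g
    res = minimize(objective, x0=y.astype(float), jac=True, method="L-BFGS-B")
    return res.x

# toy usage: a noisy two-level step signal
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * np.random.randn(100)
x_hat = tv_denoise_1d(y, lam=2.0)
```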
Abstract:
Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory building on results from the image processing community to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional, that, when minimized, solves the PWC denoising problem. It shows how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using this understanding developed in part I, we introduce several novel PWC denoising methods, which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering are touched on.
Abstract:
We present a new two-dimensional linear inversion method for gravity data produced by sedimentary basins with discontinuous basement relief. The method uses an interpretation model consisting of a set of juxtaposed two-dimensional horizontal ribbons whose thicknesses are the parameters to be estimated. The density contrast between the basement and the sediments is assumed constant and known. The thickness estimates are stabilized with the total variation (TV) functional, which allows solutions exhibiting local discontinuities in the basement relief. The relief estimates are obtained by solving a system of linear equations in the L1 norm. Because linear methods underestimate basement depths for basins deeper than about 500 m, we amplify the depth estimates by modifying the matrix associated with the ribbon interpretation model. The estimates obtained with this procedure are, in general, slightly overestimated, so they are corrected using a correction defined by the Bouguer slab expression. Tests on synthetic and real data produced results comparable to those of the nonlinear method but required less computation time. The ratio R between the times required by the nonlinear method and the proposed method grows with the number of observations and parameters: for example, with 60 observations and 60 parameters R equals 4, whereas with 2500 observations and 2500 parameters R grows to 16.8. The proposed method and the nonlinear inversion method were also applied to real data from Steptoe Valley, Nevada, USA, and from the POEMA bridge, on the Guamá Campus in Belém, producing solutions similar to those obtained with the nonlinear method while requiring less computation time.
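The Bouguer-slab correction mentioned relies on the standard infinite-slab expression (G the gravitational constant, Δρ the density contrast, t the slab thickness):

\[
\Delta g \;=\; 2\pi G\,\Delta\rho\, t
\quad\Longrightarrow\quad
t \;=\; \frac{\Delta g}{2\pi G\,\Delta\rho} ,
\]

which is presumably the basis for rescaling the slightly overestimated thicknesses so that they remain consistent with the observed anomaly.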
Abstract:
This paper describes the relative influence of: (i) landscape scale environmental and hydrological factors; (ii) local scale environmental conditions including recent flow history, and; (iii) spatial effects (proximity of sites to one another) on the spatial and temporal variation in local freshwater fish assemblages in the Mary River, south-eastern Queensland, Australia. Using canonical correspondence analysis, each of the three sets of variables explained similar amounts of variation in fish assemblages (ranging from 44 to 52%). Variation in fish assemblages was partitioned into eight unique components: pure environmental, pure spatial, pure temporal, spatially structured environmental variation, temporally structured environmental variation, spatially structured temporal variation, the combined spatial/temporal component of environmental variation and unexplained variation. The total variation explained by these components was 65%. The combined spatial/temporal/environmental component explained the largest component (30%) of the total variation in fish assemblages, whereas pure environmental (6%), temporal (9%) and spatial (2%) effects were relatively unimportant. The high degree of intercorrelation between the three different groups of explanatory variables indicates that our understanding of the importance to fish assemblages of hydrological variation (often highlighted as the major structuring force in river systems) is dependent on the environmental context in which this role is examined.
Abstract:
Common diseases such as endometriosis (ED), Alzheimer's disease (AD) and multiple sclerosis (MS) account for a significant proportion of the health care burden in many countries. Genome-wide association studies (GWASs) for these diseases have identified a number of individual genetic variants contributing to the risk of those diseases. However, the effect size for most variants is small and collectively the known variants explain only a small proportion of the estimated heritability. We used a linear mixed model to fit all single nucleotide polymorphisms (SNPs) simultaneously, and estimated genetic variances on the liability scale using SNPs from GWASs in unrelated individuals for these three diseases. For each of the three diseases, case and control samples were not all genotyped in the same laboratory. We demonstrate that a careful analysis can obtain robust estimates, but also that insufficient quality control (QC) of SNPs can lead to spurious results and that too stringent QC is likely to remove real genetic signals. Our estimates show that common SNPs on commercially available genotyping chips capture significant variation contributing to liability for all three diseases. The estimated proportion of total variation tagged by all SNPs was 0.26 (SE 0.04) for ED, 0.24 (SE 0.03) for AD and 0.30 (SE 0.03) for MS. Further, we partitioned the genetic variance explained into five categories by a minor allele frequency (MAF), by chromosomes and gene annotation. We provide strong evidence that a substantial proportion of variation in liability is explained by common SNPs, and thereby give insights into the genetic architecture of the diseases.
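A hedged sketch of the linear mixed model used for such whole-genome ("all SNPs simultaneously") estimates, under standard GREML-style notation (y the phenotype vector, X fixed-effect covariates, G the SNP-derived genomic relationship matrix):

\[
y = X\beta + g + \varepsilon,\qquad
g \sim N(0,\, \mathbf{G}\sigma_{g}^{2}),\quad
\varepsilon \sim N(0,\, \mathbf{I}\sigma_{e}^{2}),\qquad
h^{2}_{\mathrm{SNP}} = \frac{\sigma_{g}^{2}}{\sigma_{g}^{2} + \sigma_{e}^{2}} ,
\]

with the variance components estimated by REML and, for case-control data, the resulting proportion transformed to the liability scale using the disease prevalence.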