975 results for Total-Variation Regularization


Relevance: 100.00%

Abstract:

Fluorescence confocal microscopy (FCM) is now one of the most important tools in biomedical research. It makes it possible to accurately study the dynamic processes occurring inside the cell and its nucleus by following the motion of fluorescent molecules over time. Due to the small amount of acquired radiation and the huge optical and electronic amplification, FCM images are usually corrupted by a severe type of Poisson noise. This noise may be even more damaging when very low-intensity incident radiation is used to avoid phototoxicity. In this paper, a Bayesian algorithm is proposed to remove the intensity-dependent Poisson noise corrupting FCM image sequences. The observations are organized in a 3-D tensor where each plane is one of the images of a cell nucleus acquired over time using the fluorescence loss in photobleaching (FLIP) technique. The method denoises all images simultaneously by exploiting both spatial and temporal correlations, using an anisotropic 3-D filter that may be tuned separately in the space and time dimensions. Tests on synthetic and real data illustrate the application of the algorithm, and a comparison with several state-of-the-art algorithms is also presented.
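
The abstract does not state the objective explicitly, but a Bayesian (MAP) formulation of this kind typically combines the Poisson negative log-likelihood with a spatio-temporal regularizer whose strength can be tuned separately in space and time; a plausible schematic form, with illustrative symbols rather than the authors' notation, is

    \hat{x} = \arg\min_{x \ge 0} \; \sum_i ( x_i - y_i \log x_i ) + \lambda_s \, \mathrm{TV}_{xy}(x) + \lambda_t \, \mathrm{TV}_t(x)

where y is the observed 3-D tensor (two spatial dimensions plus time) and the weights \lambda_s and \lambda_t control the anisotropy between the spatial and temporal dimensions.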

Relevance: 100.00%

Abstract:

Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort to understand the basic mechanisms behind blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches worked. On the one hand, one could observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge, and that one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is instead what makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the top performing algorithms.
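
For concreteness, the total-variation blind-deconvolution energy studied in this line of work is usually of the form (a standard formulation, not necessarily the authors' exact one)

    \min_{u, k} \; \| k \ast u - f \|_2^2 + \lambda \, \mathrm{TV}(u) \quad \text{s.t.} \quad k \ge 0, \;\; \textstyle\sum_s k(s) = 1

where f is the noisy blurry input, u the latent sharp image, and k the blur kernel; alternating minimization updates u and k in turn.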

Relevance: 100.00%

Abstract:

Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR), motion-free volume from a set of clinical low-resolution (LR) images. The reconstruction is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has been drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-descent techniques have been applied for optimization. In preliminary work, it was shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and Variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal data and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of the regularization terms in the presence of residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization relative to the data-fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
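
As background, the super-resolution inverse problem is commonly posed with one forward model per low-resolution stack; a generic form, with illustrative operator names, is

    y_k = D_k B_k M_k x + n_k, \qquad \hat{x} = \arg\min_x \; \sum_k \| D_k B_k M_k x - y_k \|_2^2 + \lambda \, \mathrm{TV}(x)

where M_k models the motion of stack k, B_k the slice blur, D_k the downsampling, and n_k the noise; the TV term is the edge-preserving regularizer whose weight \lambda the paper proposes to select automatically.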

Relevance: 100.00%

Abstract:

Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies, and more recently non-local means. Although TV energies are quite attractive because of their edge-preserving ability, only standard explicit steepest-descent techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), whereas existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
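
The O(1/n²) rate is characteristic of accelerated first-order schemes of the FISTA/Nesterov family; a generic form of such an iteration, shown for orientation only (the paper's exact algorithm may differ), is

    x_n = \mathrm{prox}_{\tau \lambda \mathrm{TV}} ( y_n - \tau \nabla f(y_n) ), \qquad t_{n+1} = \frac{1 + \sqrt{1 + 4 t_n^2}}{2}, \qquad y_{n+1} = x_n + \frac{t_n - 1}{t_{n+1}} (x_n - x_{n-1})

which achieves F(x_n) - F* = O(1/n²), i.e., O(1/√ε) iterations to reach accuracy ε, versus O(1/n) and O(1/ε) for plain steepest-descent schemes.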

Relevance: 100.00%

Abstract:

The work carried out in this thesis concerns the study and formulation of computational methods for removing the noise present in images, i.e., the process of denoising: reconstructing an image corrupted by noise, given a priori knowledge of the degradation phenomenon. The denoising problem is formulated as the minimization of a functional given by the sum of a data-fidelity term and the Total Variation. Image denoising is addressed through techniques based on split Bregman and weighted Total Variation (TV), an ill-conditioned problem, i.e., one that is sensitive to small perturbations of the data. These techniques make it possible to optimize the visual quality of the images under consideration.
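
As an illustration of the split Bregman approach the thesis builds on, the following is a minimal NumPy sketch of (unweighted) anisotropic TV denoising, assuming periodic boundaries and an FFT solve for the quadratic subproblem; the parameters mu and lam are illustrative, not the thesis's notation.

    import numpy as np

    def grad(u):
        # forward differences, periodic boundaries
        return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

    def div(px, py):
        # discrete divergence; -div is the adjoint of grad
        return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

    def shrink(x, gamma):
        # soft-thresholding: closed-form minimizer of the d-subproblem
        return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

    def split_bregman_tv(f, mu=20.0, lam=1.0, n_iter=50):
        """Anisotropic TV denoising: min_u  mu/2 ||u - f||^2 + |D_x u|_1 + |D_y u|_1."""
        u = f.copy()
        dx = np.zeros_like(f); dy = np.zeros_like(f)
        bx = np.zeros_like(f); by = np.zeros_like(f)
        # Fourier symbol of grad^T grad (i.e., minus the periodic Laplacian)
        wy = 2 * np.pi * np.fft.fftfreq(f.shape[0])
        wx = 2 * np.pi * np.fft.fftfreq(f.shape[1])
        denom = mu + lam * ((2 - 2 * np.cos(wy))[:, None] + (2 - 2 * np.cos(wx))[None, :])
        for _ in range(n_iter):
            # u-subproblem: (mu - lam * Laplacian) u = mu f - lam div(d - b)
            rhs = mu * f - lam * div(dx - bx, dy - by)
            u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
            ux, uy = grad(u)
            # d-subproblem: soft-threshold the shifted gradients
            dx = shrink(ux + bx, 1.0 / lam)
            dy = shrink(uy + by, 1.0 / lam)
            # Bregman variable updates
            bx += ux - dx
            by += uy - dy
        return u

Each iteration alternates an exact quadratic solve for u with cheap per-pixel shrinkage updates, which is what makes the split Bregman scheme attractive for TV problems.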

Relevance: 100.00%

Abstract:

We obtain upper bounds for the total variation distance between the distributions of two Gibbs point processes in a very general setting. Applications are provided to various well-known processes and settings from spatial statistics and statistical physics, including the comparison of two Lennard-Jones processes, the hard-core approximation of an area-interaction process, and the approximation of lattice processes by a continuous Gibbs process. Our proof of the main results is based on Stein's method. We construct an explicit coupling between two spatial birth-death processes to obtain Stein factors, and employ the Georgii-Nguyen-Zessin equation to obtain the total bound.
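
For reference, the total variation distance between the distributions P and Q of two point processes is

    d_TV(P, Q) = \sup_A | P(A) - Q(A) |

the supremum being taken over measurable events A; Stein's method bounds this quantity via a coupling of the two processes, here constructed from spatial birth-death dynamics.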

Relevance: 100.00%

Abstract:

In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization where at each step either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices have dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. This results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the state of the art.
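
To make the role of the delayed scaling concrete, here is a minimal NumPy/SciPy sketch of the alternating scheme; this is not the authors' code, and the forward model, step sizes, and parameter values are illustrative.

    import numpy as np
    from scipy.signal import convolve2d, correlate2d

    def tv_grad(u, eps=1e-3):
        # gradient of smoothed isotropic TV: -div( grad u / |grad u| )
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

    def blind_deconv(f, K=15, lam=6e-4, tau_u=5e-3, tau_k=1e-3, iters=500):
        """Alternating gradient TV blind deconvolution (K odd).
        Forward model: f(y) = sum_s u(y + s) k(s), a 'valid' correlation,
        so the latent image u is K-1 pixels larger than f per dimension."""
        u = np.pad(f, K // 2, mode='edge')
        k = np.full((K, K), 1.0 / K**2)          # flat initial kernel
        for _ in range(iters):
            # image step: gradient step on ||A(u, k) - f||^2 + lam * TV(u)
            r = correlate2d(u, k, mode='valid') - f
            u -= tau_u * (convolve2d(r, k, mode='full') + lam * tv_grad(u))
            # kernel step: UNCONSTRAINED gradient step on the data term...
            r = correlate2d(u, k, mode='valid') - f
            k -= tau_k * correlate2d(u, r, mode='valid')
            # ...followed by the delayed projection onto the simplex
            k = np.maximum(k, 0.0)
            k /= k.sum()
        return u, k

The point the paper makes lies in the last two lines of the loop: the constraints k >= 0 and sum(k) = 1 are imposed only after the unconstrained gradient step, and this seemingly harmless implementation detail is what steers the iteration away from the no-blur solution.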

Relevance: 100.00%

Abstract:

The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach which has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process in terms of a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
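
The discrete energies in such TV-based restoration schemes are typically of Rudin-Osher-Fatemi type; schematically (the paper's DG-specific treatment of the jump terms will differ in detail),

    \min_{u \in V_h^0} \; \frac{\lambda}{2} \| u - f \|_{L^2}^2 + |u|_{TV}

where V_h^0 is the space of piecewise constant (P0) functions on the mesh; for P0 functions the TV seminorm reduces to a weighted sum of inter-element jumps, |u|_{TV} = \sum_e h_e | [u]_e | over the edges e.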

Relevance: 100.00%

Abstract:

Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional that is minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and present comparisons between these methods on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
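
Schematically, the generalized functional unifying these methods can be written as (symbols illustrative; see the paper for the precise component functions)

    \hat{m} = \arg\min_m \; \sum_{i,j} \Lambda( m_i - m_j, \; m_i - x_i, \; i - j )

where x is the noisy input, m the PWC estimate, and different choices of the loss/kernel \Lambda recover total variation regularization, mean shift, running medians, and the other methods listed above.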

Relevance: 100.00%

Abstract:

We present a gravity-data inversion method for reconstructing the discontinuous basement relief of sedimentary basins in which the density contrast between the sedimentary pack and the basement is known a priori and may be constant or decrease monotonically with depth. The solution is stabilized using the total variation (TV) functional, which does not penalize abrupt variations in the solutions. We compare the proposed method with the global smoothness (GS), weighted smoothness (WS), and entropic regularization (ER) methods using synthetic data produced by 2D and 3D basins with discontinuous basement reliefs. The solutions obtained with the proposed method were better than those obtained with GS and similar to those produced by WS and ER. On the other hand, unlike WS, the proposed method does not require a priori knowledge of the maximum basement depth. Compared with ER, the TV method is operationally simpler and requires the specification of only one regularization parameter. The TV, GS, and WS methods were also applied to the following areas: Ponte do Poema (UFPA), Steptoe Valley (Nevada, USA), San Jacinto Graben (California, USA), and Büyük Menderes (Turkey). Most of these areas are characterized by the presence of high-angle faults. In all cases, TV produced estimates of the basement topography exhibiting sharp, high-angle discontinuities, in agreement with the tectonic setting of the areas in question.
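
Schematically, the inversion estimates the basement depths p by minimizing a data misfit plus the TV stabilizer (notation illustrative, not the paper's):

    \hat{p} = \arg\min_p \; \| d - g(p) \|_2^2 + \mu \sum_i | p_{i+1} - p_i |

where d is the observed gravity anomaly, g(.) the nonlinear forward model for the prescribed density contrast, and \mu the single regularization parameter mentioned above; because TV penalizes absolute rather than squared differences, abrupt steps in the relief are not disproportionately penalized.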

Relevance: 90.00%

Abstract:

Due to its relationship with other properties, wood density is the main wood quality parameter. Modern, accurate methods, such as X-ray densitometry, are applied to determine the spatial distribution of density in wood sections and to evaluate wood quality. The objectives of this study were to determine the influence of growing conditions on wood density variation and tree-ring demarcation of gmelina trees from fast-growing plantations in Costa Rica. Wood density was determined by the X-ray densitometry method: wood samples cut from gmelina trees were exposed to low-energy X-rays, the radiographic films were developed and scanned using a 256-level gray scale at 1000 dpi resolution, and wood density was determined with the CRAD and CERD software. The results showed that tree-ring boundaries were distinctly delimited in trees growing in sites with rainfall lower than 2510 mm/year. It was demonstrated that tree age, climatic conditions, and plantation management affect wood density and its variability. The specific effect of each variable on wood density was quantified by multiple regression. Tree age explained 25.8% of the total variation in density and the climatic conditions of the growing site a further 19.9%; wood density was least affected by the intensity of forest management, which accounted for 5.9% of the total variation.

Relevance: 90.00%

Abstract:

We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov (L2-norm) penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples of an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong-constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
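
Schematically, strong-constraint 4DVar minimizes a Tikhonov-like (L2) cost, and the mixed approach augments it with an L1 penalty on first differences; an illustrative form (not the paper's exact notation) is

    J(x_0) = \| x_0 - x_b \|_{B^{-1}}^2 + \sum_k \| y_k - H_k(x_k) \|_{R_k^{-1}}^2 + \gamma \, \| L x_0 \|_1

where x_b is the background state, B and R_k the background and observation error covariances, H_k the observation operators, and L a first-difference (TV) operator; the L1 term is what preserves the sharp fronts that a pure L2 penalty smears.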

Relevance: 90.00%

Abstract:

An organism is built through a series of contingent factors, yet it is determined by historical, physical, and developmental constraints. A constraint should not be understood as an absolute obstacle to evolution, as it may also generate new possibilities for evolutionary change. Modularity is, in this context, an important way of organizing biological information and has been recognized as a central concept in evolutionary biology, bridging developmental, genetic, morphological, biochemical, and physiological studies. In this article, we explore how modularity affects the evolution of a complex system in two mammalian lineages by analyzing correlation, variance/covariance, and residual matrices (without size variation). We use the multivariate response-to-selection equation to simulate the behavior of Eutheria and Metatheria skulls in terms of their evolutionary flexibility and constraints. We relate these results to classical approaches based on morphological integration tests grounded in functional/developmental hypotheses. Eutherians (Neotropical primates) showed smaller magnitudes of integration compared with metatherians (didelphids), as well as more clearly delimited skull modules. Didelphids showed higher magnitudes of integration, and their modularity is so strongly influenced by within-group size variation that evolutionary responses are basically aligned with size variation. Primates still have a good portion of their total variation based on size; however, their enhanced modularization allows a broader spectrum of responses, more similar to the selection gradients applied (enhanced flexibility). Without size variation, both groups become much more similar in terms of modularity patterns and magnitudes and, consequently, in their evolutionary flexibility. J. Exp. Zool. (Mol. Dev. Evol.) 314B:663-683, 2010. © 2010 Wiley-Liss, Inc.
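
For reference, the multivariate response-to-selection equation used in such simulations is the Lande equation,

    \Delta \bar{z} = G \beta

where \Delta \bar{z} is the vector of changes in trait means, G the additive genetic variance/covariance matrix, and \beta the selection gradient; applying many random \beta vectors and measuring how well the responses G\beta align with them is a standard way to quantify evolutionary flexibility and constraint.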