975 results for Total Variation
Abstract:
Fluorescence confocal microscopy (FCM) is now one of the most important tools in biomedical research. In fact, it makes it possible to accurately study the dynamic processes occurring inside the cell and its nucleus by following the motion of fluorescent molecules over time. Due to the small amount of acquired radiation and the huge optical and electronic amplification, FCM images are usually corrupted by a severe type of Poisson noise. This noise may be even more damaging when very low intensity incident radiation is used to avoid phototoxicity. In this paper, a Bayesian algorithm is proposed to remove the intensity-dependent Poisson noise corrupting FCM image sequences. The observations are organized in a 3-D tensor where each plane is one of the images of a cell nucleus acquired over time using the fluorescence loss in photobleaching (FLIP) technique. The method removes the noise while simultaneously taking into account different spatial and temporal correlations. This is accomplished by using an anisotropic 3-D filter that may be tuned separately in the space and time dimensions. Tests using synthetic and real data are described and presented to illustrate the application of the algorithm. A comparison with several state-of-the-art algorithms is also presented.
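As a point of reference (and not necessarily the authors' exact formulation), a MAP/variational treatment of Poisson-corrupted image stacks with separately weighted spatial and temporal regularization can be written as

\[
\hat{X} \;=\; \arg\min_{X \ge 0}\; \sum_{i} \bigl( x_i - y_i \log x_i \bigr) \;+\; \lambda_s\, \mathrm{TV}_{xy}(X) \;+\; \lambda_t\, \mathrm{TV}_{t}(X),
\]

where Y is the observed 3-D tensor (one noisy FLIP frame per plane), the first term is the negative Poisson log-likelihood, and λ_s, λ_t weight the spatial and temporal smoothing independently, mirroring the separately tunable anisotropic space/time filtering described above. The symbols X, Y, λ_s, and λ_t are generic notation introduced here for illustration.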
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies, and, more recently, non-local means. Although TV energies are quite attractive because of their edge-preserving ability, only standard explicit steepest gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n^2) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
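Schematically, and in generic notation rather than the paper's exact formulation, the reconstruction problem described above is a TV-regularized inverse problem of the form

\[
\hat{x} \;=\; \arg\min_{x}\; \sum_{k} \bigl\| D_k B_k M_k\, x - y_k \bigr\|_2^2 \;+\; \lambda\, \mathrm{TV}(x),
\]

where the y_k are the low-resolution acquisitions, M_k, B_k, and D_k model motion, slice blurring, and downsampling, and TV(x) is the total variation of the reconstructed volume. Accelerated first-order convex schemes (Nesterov/FISTA-type or accelerated primal-dual methods) attain the O(1/n^2) objective-decrease rate quoted above, whereas plain explicit gradient descent attains only O(1/n). The operator names M_k, B_k, and D_k are introduced here purely for illustration.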
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted as super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. This is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has been quite attracted to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest gradient techniques have been applied for optimization. In a preliminary work, it has been shown that novel fast convex optimization techniques can be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. Firstly, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Secondly, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms in the presence of residual registration errors, and we also present a novel strategy for automatically selecting the weight of the regularization relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3], and, more recently, non-local means [4]. Although TV energies are quite attractive because of their edge-preserving ability, only standard explicit steepest gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal w.r.t. the asymptotic and iterative convergence speeds O(1/n^2) and O(1/√ε), while existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
Abstract:
The work carried out in this thesis concerns the study and formulation of computational methods for removing the noise present in images, i.e., the "denoising" process, which consists of reconstructing an image corrupted by noise given a priori knowledge of the degradation phenomenon. The denoising problem is formulated as the minimization of a functional given by the sum of a data-fidelity term and the Total Variation. Image denoising methods are addressed through techniques based on split Bregman and the weighted Total Variation (TV); this is an ill-conditioned problem, i.e., a problem sensitive to small perturbations in the data. These techniques make it possible to optimize the images under examination from the point of view of visualization.
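As a rough illustration of the split Bregman approach mentioned above, the sketch below implements the standard Goldstein-Osher split Bregman iteration for (unweighted, anisotropic) TV denoising. It is a minimal sketch in generic notation, not the thesis's implementation; the parameter values, periodic boundary handling, and FFT-based subproblem solve are assumptions made here for illustration.

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundaries."""
    return np.roll(u, -1, axis=0) - u, np.roll(u, -1, axis=1) - u

def div(px, py):
    """Backward-difference divergence, so that <grad u, p> = -<u, div p>."""
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def shrink(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv(f, mu=10.0, lam=1.0, n_iter=50):
    """Anisotropic TV denoising: min_u (mu/2)||u - f||^2 + |grad u|_1."""
    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    # Eigenvalues of the negative periodic Laplacian, for the FFT-based u-subproblem.
    n1, n2 = f.shape
    w1 = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n1) / n1)
    w2 = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n2) / n2)
    L = w1[:, None] + w2[None, :]
    for _ in range(n_iter):
        # u-subproblem: (mu*I - lam*Laplacian) u = mu*f - lam*div(d - b)
        rhs = mu * f - lam * div(dx - bx, dy - by)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu + lam * L)))
        # d-subproblem: componentwise shrinkage of grad(u) + b
        gx, gy = grad(u)
        dx, dy = shrink(gx + bx, 1.0 / lam), shrink(gy + by, 1.0 / lam)
        # Bregman variable update
        bx += gx - dx
        by += gy - dy
    return u

# Example usage on a synthetic noisy image:
# clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
# noisy = clean + 0.1 * np.random.randn(128, 128)
# denoised = split_bregman_tv(noisy, mu=10.0, lam=1.0, n_iter=50)
```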
Abstract:
We obtain upper bounds for the total variation distance between the distributions of two Gibbs point processes in a very general setting. Applications are provided to various well-known processes and settings from spatial statistics and statistical physics, including the comparison of two Lennard-Jones processes, hard core approximation of an area interaction process and the approximation of lattice processes by a continuous Gibbs process. Our proof of the main results is based on Stein's method. We construct an explicit coupling between two spatial birth-death processes to obtain Stein factors, and employ the Georgii-Nguyen-Zessin equation for the total bound.
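For context, the total variation distance appearing here is the standard metric between probability distributions,

\[
d_{\mathrm{TV}}(P, Q) \;=\; \sup_{A} \bigl| P(A) - Q(A) \bigr|,
\]

with the supremum taken over measurable events A. The result above bounds this quantity for pairs of Gibbs point process distributions, with Stein factors obtained from an explicit coupling of spatial birth-death processes.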
Abstract:
In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization where at each step either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices result in dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. This then results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the state of the art.
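In generic notation (not a verbatim transcription of Chan and Wong's formulation), the alternating scheme discussed above minimizes an energy of the form

\[
E(u, k) \;=\; \| k \ast u - f \|_2^2 \;+\; \lambda \int |\nabla u|, \qquad k \ge 0, \;\; \textstyle\sum k = 1,
\]

updating the sharp image u with the kernel k fixed and then the kernel k with u fixed. The "no-blur" solution is the pair u = f, k = δ (a Dirac delta), which attains a lower energy than the desired sharp pair; the analysis summarized above attributes the algorithm's escape from this solution to the delayed scaling (normalization) of k within the kernel update.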
Abstract:
The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach which has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process in terms of a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
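For orientation, the underlying continuous problem targeted by such schemes is the classical TV (ROF-type) denoising energy, written here in generic notation rather than as the paper's exact discrete formulation:

\[
\min_{u} \;\int_{\Omega} |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx,
\]

where f is the noisy image and λ balances regularization against data fidelity. For piecewise-constant (P0) approximations, the total variation term reduces to a sum of inter-element jumps weighted by face lengths, which is what makes a P0 DG discretization a natural fit for this energy.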
Abstract:
Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort on understanding the basic mechanisms to solve blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches worked. On the one hand, one could observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is instead what makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the top performing algorithms.
Abstract:
Due to its relationship with other properties, wood density is the main wood quality parameter. Modern, accurate methods - such as X-ray densitometry - are applied to determine the spatial distribution of density in wood sections and to evaluate wood quality. The objectives of this study were to determine the influence of growing conditions on wood density variation and tree-ring demarcation of gmelina trees from fast-growing plantations in Costa Rica. Wood density was determined by the X-ray densitometry method. Wood samples were cut from gmelina trees and exposed to low-energy X-rays. The radiographic films were developed and scanned using a 256 gray scale at 1000 dpi resolution, and wood density was determined with the CRAD and CERD software. The results showed that tree-ring boundaries were distinctly delimited in trees growing in sites with rainfall lower than 2510 mm/year. It was demonstrated that tree age, climatic conditions, and plantation management affect wood density and its variability. The specific effect of these variables on wood density was quantified by multiple regression. It was determined that tree age explained 25.8% of the total variation in density and 19.9% was explained by the climatic conditions where the trees grow. Wood density was least affected by the intensity of forest management, which accounted for 5.9% of the total variation.
Abstract:
An organism is built through a series of contingent factors, yet it is determined by historical, physical, and developmental constraints. A constraint should not be understood as an absolute obstacle to evolution, as it may also generate new possibilities for evolutionary change. Modularity is, in this context, an important way of organizing biological information and has been recognized as a central concept in evolutionary biology, bridging developmental, genetic, morphological, biochemical, and physiological studies. In this article, we explore how modularity affects the evolution of a complex system in two mammalian lineages by analyzing correlation, variance/covariance, and residual matrices (without size variation). We use the multivariate response to selection equation to simulate the behavior of Eutheria and Metatheria skulls in terms of their evolutionary flexibility and constraints. We relate these results to classical morphological integration tests based on functional/developmental hypotheses. Eutherians (Neotropical primates) showed smaller magnitudes of integration compared with Metatheria (didelphids), as well as more clearly delimited skull modules. Didelphids showed higher magnitudes of integration, and their modularity is so strongly influenced by within-group size variation that evolutionary responses are basically aligned with size variation. Primates still have a good portion of the total variation based on size; however, their enhanced modularization allows a broader spectrum of responses, more similar to the selection gradients applied (enhanced flexibility). Without size variation, both groups become much more similar in terms of modularity patterns and magnitudes and, consequently, in their evolutionary flexibility. J. Exp. Zool. (Mol. Dev. Evol.) 314B:663-683, 2010. (C) 2010 Wiley-Liss, Inc.
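For reference, the "multivariate response to selection equation" used in these simulations is commonly written as the multivariate breeder's (Lande) equation,

\[
\Delta \bar{z} \;=\; G\,\beta,
\]

where Δz̄ is the vector of responses in trait means, G is the additive genetic (or, as a proxy, phenotypic) variance/covariance matrix of the skull traits, and β is the applied selection gradient. Evolutionary flexibility can then be read off from how closely Δz̄ aligns with β, and constraint from how strongly responses are deflected toward the main (size) axis of variation; the notation here is the standard one, not necessarily that of the article.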
Abstract:
We present a new two-dimensional linear inversion method for gravity data produced by sedimentary basins with discontinuous basement relief. The method uses an interpretive model formed by a set of juxtaposed two-dimensional horizontal ribbons whose thicknesses are the parameters to be estimated. The density contrast between the basement and the sediments is assumed to be constant and known. The thickness estimates are stabilized with the Total Variation (TV) functional, which allows solutions exhibiting local discontinuities in the basement relief. The relief estimates are obtained by solving a system of linear equations in the L1 norm. Because linear methods underestimate the basement depths of basins deeper than about 500 m, we amplify the depth estimates by modifying the matrix associated with the ribbon interpretive model. The estimates obtained by this procedure are, in general, slightly overestimated, so they are corrected using a correction defined by the Bouguer slab expression. Tests on synthetic and real data produced results comparable to those of the nonlinear method, but required less computational time. The ratio R between the times required by the nonlinear method and by the proposed method grows with the number of observations and parameters. For example, for 60 observations and 60 parameters, R equals 4, while for 2500 observations and 2500 parameters R grows to 16.8. The proposed method and the nonlinear inversion method were also applied to real data from Steptoe Valley, Nevada, USA, and from the Ponte do POEMA, on the Guamá Campus in Belém, producing solutions similar to those obtained with the nonlinear method while requiring less computational time.
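Schematically, and not as the authors' exact discretization, the TV-stabilized linear inversion described above can be written as

\[
\hat{\mathbf{p}} \;=\; \arg\min_{\mathbf{p}} \;\bigl\| \mathbf{g} - \mathbf{A}\mathbf{p} \bigr\|_{1} \;+\; \mu \sum_{i} \bigl| p_{i+1} - p_i \bigr|,
\]

where g is the observed gravity anomaly, p collects the ribbon thicknesses, A is the linear sensitivity matrix of the juxtaposed-ribbon model, and μ is the regularization parameter. The second term is the discrete Total Variation functional, which, unlike smoothness stabilizers, does not penalize abrupt jumps in the estimated basement relief. The symbols g, A, p, and μ are generic notation introduced here for illustration.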
Abstract:
We present a gravity data inversion method for reconstructing the discontinuous basement relief of sedimentary basins, in which the density contrast between the sedimentary package and the basement is known a priori and may be either constant or monotonically decreasing with depth. The solution is stabilized using the total variation (TV) functional, which does not penalize abrupt variations in the solutions. We compare the proposed method with the global smoothness (SG), weighted smoothness (SP), and entropic regularization (RE) methods, using synthetic data produced by 2D and 3D basins with discontinuous basement relief. The solutions obtained with the proposed method were better than those obtained with SG and similar to those produced by SP and RE. On the other hand, unlike SP, the proposed method does not require a priori knowledge of the maximum basement depth. Compared with RE, the TV method is operationally simpler and requires the specification of only one regularization parameter. The TV, SG, and SP methods were also applied to the following areas: Ponte do Poema (UFPA), Steptoe Valley (Nevada, USA), the San Jacinto Graben (California, USA), and Büyük Menderes (Turkey). Most of these areas are characterized by the presence of high-angle faults. In all cases, TV produced estimates of the basement topography exhibiting abrupt, high-angle discontinuities, in agreement with the tectonic setting of the areas in question.
Abstract:
Background: Tef [Eragrostis tef (Zucc.) Trotter] is the major cereal crop of Ethiopia, where it is annually cultivated on more than three million hectares of land by over six million small-scale farmers. It is broadly grouped into white- and brown-seeded types depending on grain color, although some intermediate-color grains also exist. Earlier breeding experiments focused on white-seeded tef, and a number of improved varieties were released to the farming community. Thirty-six brown-seeded tef genotypes were evaluated using a 6 × 6 simple lattice design at three locations in the central highlands of Ethiopia to assess the productivity, heritability, and association among major pheno-morphic traits. Results: The mean squares due to genotypes, locations, and genotype-by-location interaction were significant (P < 0.01) for all traits studied. Genotypic and phenotypic coefficients of variation ranged from 2.5 to 20.3% and from 4.3 to 21.7%, respectively. Grain yield showed significant (P < 0.01) genotypic correlation with shoot biomass and harvest index, while it had highly significant (P < 0.01) phenotypic correlation with all the traits evaluated. In addition, the association of lodging index with biomass and grain yield was negative and significant at the phenotypic level, while it was not significant at the genotypic level. Cluster analysis grouped the 36 test genotypes into seven distinct classes. Furthermore, the first three principal components with eigenvalues greater than unity accounted for 78.3% of the total variation. Conclusion: Overall, the current study identified genotypes with superior grain yield and other desirable traits for further evaluation and eventual release to the farming community.
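For reference, the genotypic and phenotypic coefficients of variation reported above are conventionally computed as

\[
\mathrm{GCV}\,(\%) \;=\; \frac{100\,\sqrt{\sigma^2_g}}{\bar{x}},
\qquad
\mathrm{PCV}\,(\%) \;=\; \frac{100\,\sqrt{\sigma^2_p}}{\bar{x}},
\]

where σ²_g and σ²_p are the genotypic and phenotypic variance components estimated from the analysis of variance and x̄ is the grand mean of the trait. This is the standard definition rather than a formula quoted from the study.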