978 results for scale-space
Abstract:
Although it has been suggested that the retinal vasculature is a diffusion-limited aggregation (DLA) fractal, no study has been dedicated to standardizing its fractal analysis. The aims of this project were to standardize a method to estimate the fractal dimensions of the retinal vasculature and to characterize their normal values; to determine whether this estimation depends on skeletonization and on the segmentation and calculation methods; to assess the suitability of the DLA model; and to determine the usefulness of log-log graphs in characterizing vasculature fractality. To achieve these aims, the information, mass-radius and box-counting dimensions of the vasculatures of 20 eyes were compared when the vessels were segmented manually or computationally; the fractal dimensions of the vasculatures of 60 eyes of healthy volunteers were compared with those of 40 DLA models; and the log-log graphs obtained were compared with those of known fractals and of non-fractals. The main results were: the fractal dimensions of the vascular trees depended on the segmentation and dimension-calculation methods, but there was no difference between manual segmentation and the scale-space, multithreshold and wavelet computational methods; the means of the information and box-counting dimensions for the arteriolar trees were 1.29, against 1.34 and 1.35 for the venular trees; the dimensions of the DLA models were higher than those of the vessels; and the log-log graphs were straight, but with varying local slopes, both for vascular trees and for fractals and non-fractals.
These results lead to the following conclusions: the estimation of the fractal dimensions of the retinal vasculature depends on its skeletonization and on the segmentation and calculation methods; log-log graphs are not suitable as a fractality test; the means of the information and box-counting dimensions for the normal eyes were 1.47 and 1.43, respectively; and the DLA model with optic-disc seeding is not sufficient for modeling retinal vascularization.
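The box-counting estimate discussed in this abstract can be sketched generically: count the boxes of side s that intersect the structure and fit the slope of log N(s) versus log(1/s). This is an illustration of the dimension calculation only, not the thesis's standardized protocol; all names and parameters are ours.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate D from occupied-box counts N(s) ~ s^(-D)."""
    counts = []
    for s in box_sizes:
        # Trim so the grid tiles the image exactly, then pool s x s blocks.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log N(s) versus log(1/s) is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(counts), 1)
    return slope

# Sanity check: a straight line should come out close to dimension 1.
line = np.zeros((256, 256), dtype=bool)
np.fill_diagonal(line, True)
d = box_counting_dimension(line)
```

For a diagonal line the counts scale as N(s) = 256/s, so the fitted slope is exactly 1; for a real vessel skeleton the estimate would fall between 1 and 2.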
Abstract:
The present work presents a novel fractal-dimension method for shape analysis. The proposed technique extracts descriptors from a shape by applying a multi-scale approach to the calculation of the fractal dimension. The fractal dimension is estimated by applying the curvature scale-space technique to the original shape. By applying a multi-scale transform to this calculation, we obtain a set of descriptors capable of describing the shape under investigation with high precision. We validate the computed descriptors in a classification process. The results demonstrate that the novel technique provides highly reliable descriptors, confirming the efficiency of the proposed method. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4757226]
Abstract:
Non-commutative geometry indicates a deformation of the energy-momentum dispersion relation for massless particles, f(E) = E/(pc) ≠ 1. This distorted energy-momentum relation can affect the radiation-dominated phase of the universe at sufficiently high temperature. This prompted the idea of non-commutative inflation by Alexander et al (2003 Phys. Rev. D 67 081301) and Koh and Brandenberger (2007 JCAP06(2007) 021 and JCAP11(2007) 013). These authors studied a one-parameter family of non-relativistic dispersion relations that leads to inflation: the family of curves f(E) = 1 + (lambda E)^alpha. We show here how the conceptually different structure of symmetries of non-commutative spaces can lead, in a mathematically consistent way, to the fundamental equations of non-commutative inflation driven by radiation. We describe how this structure can be considered independently of (but including) the idea of non-commutative spaces as a starting point of the general inflationary deformation of SL(2, C). We analyze the conditions on the dispersion relation that lead to inflation as a set of inequalities which plays the same role as the slow-roll conditions on the potential of a scalar field. We study conditions for a possible numerical approach to obtain a general one-parameter family of dispersion relations that lead to successful inflation.
Abstract:
[EN] In this paper we show that a classic optical flow technique by Nagel and Enkelmann can be regarded as an early anisotropic diffusion method with a diffusion tensor. We introduce three improvements into the model formulation: we avoid inconsistencies caused by centering the brightness term and the smoothness term in different images; we use a linear scale-space focusing strategy from coarse to fine scales to avoid convergence to physically irrelevant local minima; and we create an energy functional that is invariant under linear brightness changes. Applying a gradient descent method to the resulting energy functional leads to a system of diffusion-reaction equations. We prove that this system has a unique solution under realistic assumptions on the initial data, and we present an efficient linear implicit numerical scheme in detail. Our method creates flow fields with 100% density over the entire image domain, it is robust under a large range of parameter variations, and it can recover displacement fields that are far beyond the typical one-pixel limits characteristic of many differential methods for determining optical flow. We show that it performs better than the classic optical flow methods with 100% density evaluated by Barron et al. (1994). Our software is available from the Internet.
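As a rough illustration of this class of variational optic-flow methods, here is a minimal Horn-Schunck-style fixed-point iteration with an isotropic smoothness term. The anisotropic Nagel-Enkelmann diffusion tensor and the scale-space focusing described in the abstract are omitted; the function name, the test image, and all parameter values are our own arbitrary choices.

```python
import numpy as np

def horn_schunck_flow(I1, I2, alpha=1.0, n_iter=200):
    """Dense flow by minimizing a quadratic brightness + smoothness energy."""
    Ix = np.gradient(I1, axis=1)          # spatial image derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                          # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour averaging implements the isotropic smoothness term.
        u_avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v

# A blob translated one pixel to the right should yield mostly positive u.
yy, xx = np.mgrid[0:64, 0:64]
I1 = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / 50.0)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck_flow(I1, I2)
```

The one-pixel displacement here is within reach of this linearized scheme; the point of the paper's coarse-to-fine scale-space strategy is precisely to recover displacements far beyond this limit.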
Abstract:
[EN] We present an energy-based approach to estimate a dense disparity map from a set of two weakly calibrated stereoscopic images while preserving its discontinuities resulting from image boundaries. We first derive a simplified expression for the disparity that allows us to estimate it from a stereo pair of images using an energy minimization approach. We assume that the epipolar geometry is known, and we include this information in the energy model. Discontinuities are preserved by means of a regularization term based on the Nagel-Enkelmann operator. We investigate the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method. The resulting parabolic problem has a unique solution. In order to reduce the risk of being trapped in irrelevant local minima during the iterations, we use a focusing strategy based on a linear scale-space. Experimental results on both synthetic and real images are presented to illustrate the capabilities of this PDE- and scale-space-based method.
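For contrast with this energy-based formulation, the same task has a much cruder classical baseline: windowed SSD block matching along the epipolar lines of a rectified pair. The sketch below is that baseline, not the authors' PDE method; it has none of the discontinuity-preserving regularization, and all names and parameters are ours.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=6, win=2):
    """SSD block matching along horizontal epipolar lines (rectified pair)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            # Cost of each candidate disparity d along the same scanline.
            costs = [np.sum((patch - right[y - win:y + win + 1,
                                           x - d - win:x - d + win + 1])**2)
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Shifting a random texture by 3 pixels gives a known ground-truth disparity.
rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = np.roll(left, -3, axis=1)   # left[y, x] appears at right[y, x - 3]
disp = block_match_disparity(left, right)
```

On richly textured input this baseline is exact, but it produces blocky, hole-ridden maps near occlusions and untextured regions, which is what motivates variational methods like the one above.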
Abstract:
[EN] In this paper we present a new model for optical flow calculation using a variational formulation which preserves discontinuities of the flow much better than classical methods. We study the Euler-Lagrange equations associated with the variational problem. In the case of quadratic energy, we show the existence and uniqueness of the corresponding evolution problem. Since our method avoids linearization of the optical flow constraint, it can recover large displacements in the scene. We avoid convergence to irrelevant local minima by embedding our method into a linear scale-space framework and using a focusing strategy from coarse to fine scales.
Abstract:
[EN] In this paper we present a method for the regularization of a set of unstructured 3D points obtained from a sequence of stereo images. This method takes into account the information supplied by the disparity maps computed between pairs of images to constrain the regularization of the set of 3D points. We propose a model based on an energy composed of two terms: an attachment term that minimizes the distance from the 3D points to the projective lines of the camera points, and a second term that regularizes the set of 3D points while preserving the discontinuities present in the disparity maps. We embed this energy in a 2D finite element method. Minimizing this energy leads to a large system of equations that can be optimized for fast computation. We derive an efficient implicit numerical scheme which reduces the number of calculations and memory allocations.
Abstract:
[ES] In this work we propose a new model for disparity computation and 3-D reconstruction from a stereo system composed of two color images. We propose a new model for disparity computation based on an energy criterion. To compute the minima of this energy functional we use the associated Euler-Lagrange partial differential equation. This model extends to color images the model developed in "L. Alvarez, R. Deriche, J. Sánchez and J. Weickert, Dense disparity map estimation respecting image discontinuities: A PDE and Scale-Space Based Approach. INRIA Rapport de Recherche Nº 3874, 2000", with some changes in the strategy to avoid falling into local minima of the energy. Finally, we present some numerical experiments on the 3-D reconstruction obtained with this method on real stereo image pairs.
Abstract:
The present work is devoted to the assessment of the physics of energy fluxes in the space of scales and in the physical space of wall-turbulent flows. The generalized Kolmogorov equation is applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description is shown to be crucial to understanding the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space where the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall turbulence is suggested and shown to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes: one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log-layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized for filtered velocity fields is derived and discussed.
The results show the effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between production-dominated scales and the inertial range, lc, and the reverse energy cascade region, lb. The systematic characterization of the resolved and subgrid physics as a function of the filter scale and of the wall distance is shown to be instrumental for the correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for energy transfer in wall turbulence, a new class of LES models is also proposed. Finally, the generalized Kolmogorov equation specialized for filtered velocity fields is shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy-viscosity models are analyzed via an a priori procedure.
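The split of a velocity field into resolved and subgrid contributions can be illustrated with a one-dimensional sharp spectral cutoff. Because such a filter is an orthogonal projection in Fourier space, the resolved and subgrid energies add up exactly to the total (Parseval). This is a generic sketch under that assumption, not the thesis's filtering of DNS channel-flow data.

```python
import numpy as np

def sharp_cutoff_split(u, k_cut):
    """Split a 1D signal into resolved (k < k_cut) and subgrid parts."""
    U = np.fft.rfft(u)
    U[k_cut:] = 0.0                      # sharp spectral cutoff
    u_res = np.fft.irfft(U, n=u.size)    # resolved (filtered) field
    return u_res, u - u_res              # subgrid = remainder

rng = np.random.default_rng(0)
u = rng.standard_normal(256)             # stand-in "velocity" signal
u_res, u_sgs = sharp_cutoff_split(u, k_cut=20)

# Projection property: energies of resolved and subgrid parts are additive.
e_tot = np.mean(u**2)
e_res = np.mean(u_res**2)
e_sgs = np.mean(u_sgs**2)
```

For smooth (e.g. Gaussian or box) filters this additivity fails, and the cross term is exactly the kind of resolved/subgrid energy exchange the generalized Kolmogorov equation for filtered fields is built to track.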
Abstract:
En esta Tesis Doctoral se aborda la utilización de filtros de difusión no lineal para obtener imágenes constantes a trozos como paso previo al proceso de segmentación. En una primera parte se propone una formulación intrínseca para la ecuación de difusión no lineal que proporcione las condiciones de diseño necesarias sobre los filtros de difusión. A partir del marco teórico propuesto, se proporciona una nueva familia de difusividades; éstas son obtenidas a partir de técnicas de difusión no lineal relacionadas con los procesos de difusión regresivos. El objetivo es descomponer la imagen en regiones cerradas que sean homogéneas en sus niveles de grises sin contornos difusos. Asimismo, se prueba que la función de difusividad propuesta satisface las condiciones de un correcto planteamiento semi-discreto. Esto muestra que, mediante el esquema semi-implícito habitualmente utilizado, realmente se hace un proceso de difusión no lineal directa, en lugar de difusión inversa, conectando con el proceso de preservación de bordes. Bajo estas condiciones, se plantea un criterio de parada para el proceso de difusión, para obtener imágenes constantes a trozos con un bajo coste computacional. Una vez aplicado todo el proceso al caso unidimensional, se extienden los resultados teóricos al caso de imágenes en 2D y 3D. Para el caso en 3D, se detalla el esquema numérico para el problema evolutivo no lineal, con condiciones de contorno Neumann homogéneas. Finalmente, se prueba el filtro propuesto para imágenes reales en 2D y 3D y se ilustran los resultados de la difusividad propuesta como método para obtener imágenes constantes a trozos. En el caso de imágenes 3D, se aborda la problemática del proceso previo a la segmentación del hígado, mediante imágenes reales provenientes de Tomografías Axiales Computarizadas (TAC). En ese caso, se obtienen resultados sobre la estimación de los parámetros de la función de difusividad propuesta.

This Ph.D. Thesis deals with the use of nonlinear diffusion filters to obtain piecewise constant images as a preliminary step for segmentation techniques. I first present an intrinsic formulation of the nonlinear diffusion equation that provides design conditions on the diffusion filters. Within this theoretical framework, I propose a new family of diffusivities; they are obtained from nonlinear diffusion techniques and are related to backward diffusion. Their goal is to split the image into closed regions with homogenized grey intensity inside and no blurred edges. It is also proved that the proposed filters satisfy the well-posedness requirements of the semi-discrete and fully discrete scale-spaces. This shows that, by using semi-implicit schemes, a forward nonlinear diffusion equation is actually solved, instead of a backward one, connecting with an edge-preserving process. Under the conditions established for the diffusivity, and using a stopping criterion for the diffusion time, I obtain piecewise constant images with low computational effort. The whole process, developed for the one-dimensional case, is extended so that the 2D and 3D theoretical results can be applied to real images. For 3D, the numerical scheme for the nonlinear evolution problem with homogeneous Neumann boundary conditions is developed in detail. Finally, I test the proposed filter on real 2D and 3D images and illustrate the effects of the proposed diffusivity function as a method to obtain piecewise constant images. For 3D, I develop a preprocessing step for liver segmentation with real images from CT (Computed Tomography). In this case, I obtain results on the estimation of the parameters of the given diffusivity function.
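The behavior this abstract describes (smoothing flat regions while keeping sharp jumps) can be sketched with a minimal one-dimensional explicit Perona-Malik-type scheme with homogeneous Neumann boundaries. This is a generic member of the family, not the thesis's semi-implicit scheme or its specific diffusivity; the function name and parameters are ours.

```python
import numpy as np

def nonlinear_diffusion_1d(u, lam=0.1, K=0.5, n_steps=200):
    """Explicit Perona-Malik-type scheme with homogeneous Neumann BCs."""
    u = u.astype(float).copy()
    for _ in range(n_steps):
        du_r = np.diff(u, append=u[-1])   # u[i+1] - u[i]; zero flux at right
        du_l = np.diff(u, prepend=u[0])   # u[i] - u[i-1]; zero flux at left
        # Diffusivity drops where gradients exceed the contrast scale K,
        # so edges diffuse slowly while flat regions are smoothed.
        g_r = 1.0 / (1.0 + (du_r / K) ** 2)
        g_l = 1.0 / (1.0 + (du_l / K) ** 2)
        u = u + lam * (g_r * du_r - g_l * du_l)
    return u

# A noisy step: flat regions should be smoothed, the jump preserved.
rng = np.random.default_rng(0)
u0 = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u_out = nonlinear_diffusion_1d(u0)
```

Run long enough, this drives each side of the step toward a constant, which is exactly the "piecewise constant image" regime the thesis targets; the stopping criterion decides when to halt before the jump itself erodes.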
Abstract:
La tomografía axial computarizada (TAC) es la modalidad de imagen médica preferente para el estudio de enfermedades pulmonares y el análisis de su vasculatura. La segmentación general de vasos en pulmón ha sido abordada en profundidad a lo largo de los últimos años por la comunidad científica que trabaja en el campo de procesamiento de imagen; sin embargo, la diferenciación entre irrigaciones arterial y venosa es aún un problema abierto. De hecho, la separación automática de arterias y venas está considerada como uno de los grandes retos futuros del procesamiento de imágenes biomédicas. La segmentación arteria-vena (AV) permitiría el estudio de ambas irrigaciones por separado, lo cual tendría importantes consecuencias en diferentes escenarios médicos y múltiples enfermedades pulmonares o estados patológicos. Características como la densidad, geometría, topología y tamaño de los vasos sanguíneos podrían ser analizadas en enfermedades que conllevan remodelación de la vasculatura pulmonar, haciendo incluso posible el descubrimiento de nuevos biomarcadores específicos que aún hoy en día permanecen ocultos. Esta diferenciación entre arterias y venas también podría ayudar a la mejora y el desarrollo de métodos de procesamiento de las distintas estructuras pulmonares. Sin embargo, el estudio del efecto de las enfermedades en los árboles arterial y venoso ha sido inviable hasta ahora a pesar de su indudable utilidad. La extrema complejidad de los árboles vasculares del pulmón hace inabordable una separación manual de ambas estructuras en un tiempo realista, fomentando aún más la necesidad de diseñar herramientas automáticas o semiautomáticas para tal objetivo. Pero la ausencia de casos correctamente segmentados y etiquetados conlleva múltiples limitaciones en el desarrollo de sistemas de separación AV, en los cuales son necesarias imágenes de referencia tanto para entrenar como para validar los algoritmos.
Por ello, el diseño de imágenes sintéticas de TAC pulmonar podría superar estas dificultades ofreciendo la posibilidad de acceso a una base de datos de casos pseudoreales bajo un entorno restringido y controlado donde cada parte de la imagen (incluyendo arterias y venas) está unívocamente diferenciada. En esta Tesis Doctoral abordamos ambos problemas, los cuales están fuertemente interrelacionados. Primero se describe el diseño de una estrategia para generar, automáticamente, fantomas computacionales de TAC de pulmón en humanos. Partiendo de conocimientos a priori, tanto biológicos como de características de imagen de CT, acerca de la topología y relación entre las distintas estructuras pulmonares, el sistema desarrollado es capaz de generar vías aéreas, arterias y venas pulmonares sintéticas usando métodos de crecimiento iterativo, que posteriormente se unen para formar un pulmón simulado con características realistas. Estos casos sintéticos, junto a imágenes reales de TAC sin contraste, han sido usados en el desarrollo de un método completamente automático de segmentación/separación AV. La estrategia comprende una primera extracción genérica de vasos pulmonares usando partículas espacio-escala, y una posterior clasificación AV de tales partículas mediante el uso de Graph-Cuts (GC) basados en la similitud con arteria o vena (obtenida con algoritmos de aprendizaje automático) y la inclusión de información de conectividad entre partículas. La validación de los fantomas pulmonares se ha llevado a cabo mediante inspección visual y medidas cuantitativas relacionadas con las distribuciones de intensidad, dispersión de estructuras y relación entre arterias y vías aéreas, los cuales muestran una buena correspondencia entre los pulmones reales y los generados sintéticamente. 
La evaluación del algoritmo de segmentación AV está basada en distintas estrategias de comprobación de la exactitud en la clasificación de vasos, las cuales revelan una adecuada diferenciación entre arterias y venas tanto en los casos reales como en los sintéticos, abriendo así un amplio abanico de posibilidades en el estudio clínico de enfermedades cardiopulmonares y en el desarrollo de metodologías y nuevos algoritmos para el análisis de imágenes pulmonares.

ABSTRACT Computed tomography (CT) is the reference imaging modality for the study of lung diseases and pulmonary vasculature. Lung vessel segmentation has been widely explored by the biomedical image processing community; however, differentiation of arterial from venous irrigations is still an open problem. Indeed, automatic separation of arterial and venous trees has been considered in recent years as one of the main future challenges in the field. Artery-vein (AV) segmentation would be useful in different medical scenarios and in multiple pulmonary diseases or pathological states, allowing the study of arterial and venous irrigations separately. Features such as density, geometry, topology and size of vessels could be analyzed in diseases that imply vasculature remodeling, even making possible the discovery of new specific biomarkers that remain hidden nowadays. Differentiation between arteries and veins could also enhance or improve methods for processing pulmonary structures. Nevertheless, AV segmentation has been unfeasible until now in clinical routine despite its undoubted usefulness. The huge complexity of pulmonary vascular trees makes a manual segmentation of both structures unfeasible in realistic time, encouraging the design of automatic or semiautomatic tools to perform the task. However, the lack of properly labeled cases seriously limits the development of AV segmentation systems, where reference standards are necessary in both the algorithm training and validation stages.

For that reason, the design of synthetic CT images of the lung could overcome these difficulties by providing a database of pseudorealistic cases in a constrained and controlled scenario where each part of the image (including arteries and veins) is differentiated unequivocally. In this Ph.D. Thesis we address both interrelated problems. First, the design of a complete framework to automatically generate computational CT phantoms of the human lung is described. Starting from biological and image-based knowledge about the topology and relationships between structures, the system is able to generate synthetic pulmonary arteries, veins, and airways using iterative growth methods, which are then merged into a final simulated lung with realistic features. These synthetic cases, together with labeled real CT datasets, have been used as reference for the development of a fully automatic pulmonary AV segmentation/separation method. The approach comprises a vessel extraction stage using scale-space particles, followed by an artery-vein classification of those particles using graph cuts (GC) based on arterial/venous similarity scores obtained with a machine learning (ML) pre-classification step and particle connectivity information. Validation of the pulmonary phantoms, based on visual examination and quantitative measurements of intensity distributions, dispersion of structures and relationships between pulmonary air and blood flow systems, shows good correspondence between real and synthetic lungs. The evaluation of the AV segmentation algorithm, based on different strategies to assess the accuracy of vessel particle classification, reveals accurate differentiation between arteries and veins in both real and synthetic cases, opening a huge range of possibilities in the clinical study of cardiopulmonary diseases and in the development of methodological approaches for the analysis of pulmonary images.
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. 
This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
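The core mechanism described above, that an opposite-signed luminance ramp narrows the half-wave-rectified gradient profile (and so reduces perceived blur), can be checked numerically. This sketch uses an idealized Gaussian gradient profile for the blurred edge and a plain half-wave rectifier; it illustrates the mechanism only and is not the published multi-scale model.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
edge_gradient = np.exp(-x**2 / 2.0)   # 1st derivative of a blurred edge
ramp_gradient = -0.3                  # constant gradient of an opposing ramp

def halfmax_width(profile, x):
    """Width of the region where the profile exceeds half its maximum."""
    above = x[profile >= profile.max() / 2.0]
    return above[-1] - above[0]

# Half-wave rectification after adding the ramp narrows the gradient profile:
# the ramp pushes the flanks of the Gaussian below zero, and the rectifier
# clips them away, leaving a narrower internal blur representation.
w_plain = halfmax_width(np.maximum(edge_gradient, 0.0), x)
w_ramped = halfmax_width(np.maximum(edge_gradient + ramp_gradient, 0.0), x)
```

A narrower rectified gradient profile is read by the model as a sharper (less blurred) edge, which in turn lowers the contrast estimate, matching the perceptual effects reported above.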
Abstract:
Influential models of edge detection have generally supposed that an edge is detected at peaks in the 1st derivative of the luminance profile, or at zero-crossings in the 2nd derivative. However, when presented with blurred triangle-wave images, observers consistently marked edges not at these locations, but at peaks in the 3rd derivative. This new phenomenon, termed ‘Mach edges’, persisted when a luminance ramp was added to the blurred triangle-wave. Modelling these Mach edge detection data required the addition of a physiologically plausible filter prior to the 3rd derivative computation. A viable alternative model was examined on the basis of data obtained with short-duration, high spatial-frequency stimuli. Detection and feature-marking methods were used to examine the perception of Mach bands in an image set that spanned a range of Mach band detectabilities. A scale-space model that computed edge and bar features in parallel provided a better fit to the data than 4 competing models that combined information across scales in a different manner, or computed edge or bar features at a single scale. The perception of luminance bars was examined in 2 experiments. Data for one image set suggested a simple rule for the perception of a small Gaussian bar on a larger inverted Gaussian bar background. In previous research, discriminability (d′) has typically been reported to be a power function of contrast, where the exponent (p) is 2 to 3. However, using bar, grating, and Gaussian edge stimuli, with several methodologies, values of p were obtained that ranged from 1 to 1.7 across 6 experiments. This novel finding was explained by appealing to low stimulus uncertainty, or a near-linear transducer.
Abstract:
Ernst Mach observed that light or dark bands could be seen at abrupt changes of luminance gradient in the absence of peaks or troughs in luminance. Many models of feature detection share the idea that bars, lines, and Mach bands are found at peaks and troughs in the output of even-symmetric spatial filters. Our experiments assessed the appearance of Mach bands (position and width) and the probability of seeing them on a novel set of generalized Gaussian edges. Mach band probability was mainly determined by the shape of the luminance profile and increased with the sharpness of its corners, controlled by a single parameter (n). Doubling or halving the size of the images had no significant effect. Variations in contrast (20%-80%) and duration (50-300 ms) had relatively minor effects. These results rule out the idea that Mach bands depend simply on the amplitude of the second derivative, but a multiscale model, based on Gaussian-smoothed first- and second-derivative filtering, can account accurately for the probability and perceived spatial layout of the bands. A key idea is that Mach band visibility depends on the ratio of second- to first-derivative responses at peaks in the second-derivative scale-space map. This ratio is approximately scale-invariant and increases with the sharpness of the corners of the luminance ramp, as observed. The edges of Mach bands pose a surprisingly difficult challenge for models of edge detection, but a nonlinear third-derivative operation is shown to predict the locations of Mach band edges strikingly well. Mach bands thus shed new light on the role of multiscale filtering systems in feature coding. © 2012 ARVO.
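The key ratio idea can be probed numerically on generalized Gaussian edges: as the corner-sharpness parameter n grows, the peak second-derivative response grows relative to the peak first-derivative response. The sketch below uses raw numerical derivatives of an edge whose luminance gradient is exp(-|x|^n), rather than the paper's Gaussian-smoothed derivative filters, so it only illustrates the trend.

```python
import numpy as np

def derivative_ratio(n, x=np.linspace(-4.0, 4.0, 4001)):
    """Peak |2nd derivative| over peak |1st derivative| of a generalized
    Gaussian edge whose luminance gradient is exp(-|x|**n)."""
    d1 = np.exp(-np.abs(x)**n)   # 1st derivative of the luminance profile
    d2 = np.gradient(d1, x)      # 2nd derivative, computed numerically
    return np.abs(d2).max() / np.abs(d1).max()

# Sharper corners (larger n) should raise the 2nd-to-1st derivative ratio,
# consistent with Mach bands becoming more probable on sharper ramps.
ratios = [derivative_ratio(n) for n in (2, 4, 8)]
```

For n = 2 the profile is an ordinary Gaussian gradient and the ratio is lowest; pushing n upward concentrates curvature at the corners of the ramp, mirroring the reported increase in Mach band probability with corner sharpness.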
Abstract:
A sediment core from Reykjanes Ridge has been studied at 10- to 50-year time resolution to document variability of Holocene surface water conditions in the western North Atlantic and to evaluate effects of Holocene ice-rafting episodes. Diatom assemblages are converted to quantitative sea surface temperatures (SST) using three different transfer functions. Spectral and scale-space methods are also applied to the records to explore variability at different timescales. Diatom assemblage and SST records clearly show that decaying remnants of the Laurentide ice sheet strongly influenced early Holocene climate in the western North Atlantic. This overrode the predominance of Milankovitch forcing, which played a key role in the development of Holocene climate in the eastern North Atlantic and Nordic Seas. Superimposed on general Holocene climate change is high-frequency SST variability on the order of 1°-3°C. The record also documents climatic oscillations with 600- to 1000-, ~1500-, and 2500-year periodicities, with a time-dependent dominance of different periodicities through the Holocene; a clear change in variability occurred about 5 ka BP. The SST record also provides evidence for Holocene cooling events (HCE) that, in some cases, correlate to documented southward intrusions of ice into the North Atlantic.
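The spectral side of such an analysis can be sketched with a plain periodogram on a synthetic record. The numbers here (a 1500-year cycle, 10-year sampling, 10,000-year span) are illustrative stand-ins and are not the core's data.

```python
import numpy as np

dt = 10.0                              # years per sample
t = np.arange(0.0, 10000.0, dt)        # a 10,000-year synthetic record
rng = np.random.default_rng(1)
# Synthetic "SST" series: a ~1500-year cycle buried in noise.
sst = 1.5 * np.sin(2.0 * np.pi * t / 1500.0) + 0.5 * rng.standard_normal(t.size)

# Periodogram: the dominant peak should sit near the 1500-year periodicity.
power = np.abs(np.fft.rfft(sst - sst.mean()))**2
freqs = np.fft.rfftfreq(t.size, d=dt)
dominant_period = 1.0 / freqs[np.argmax(power[1:]) + 1]   # skip the DC bin
```

A scale-space treatment would complement this by tracking how such peaks appear and disappear under progressive smoothing, which is how time-dependent dominance of different periodicities can be assessed.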