20 results for fractal segmentation
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Color texture classification is an important step in image segmentation and recognition. Color information is especially important in textures of natural scenes, such as leaf surfaces, terrain models, etc. In this paper, we propose a novel approach based on the fractal dimension for color texture analysis. The proposed approach investigates the complexity of the R, G and B color channels to characterize a texture sample. We also propose to study all channels in combination, taking into consideration the correlations between them. Both approaches use the volumetric version of the Bouligand-Minkowski fractal dimension method. The results show an advantage of the proposed method over other color texture analysis methods. (C) 2011 Elsevier Ltd. All rights reserved.
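The Bouligand-Minkowski estimate underlying both approaches can be sketched briefly: dilate the structure by increasing radii r, record the dilation volume V(r), and take the fractal dimension from the slope of log V(r) versus log r. A minimal single-channel sketch (the paper works volumetrically on the R, G and B channels; the brute-force distance computation and the test geometry below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def minkowski_fd(points, shape, radii):
    """Bouligand-Minkowski fractal dimension estimate of a point set.

    points -- (N, D) integer coordinates of the structure
    shape  -- extent of the grid the dilation is measured on
    radii  -- increasing dilation radii (all >= 1)
    """
    grids = np.mgrid[tuple(slice(0, s) for s in shape)]
    grid = np.stack([g.ravel() for g in grids], axis=1).astype(float)
    # distance from every grid cell to the nearest structure point
    d = np.sqrt(((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)).min(axis=1)
    # dilation volume V(r) = number of cells within distance r
    volumes = [np.count_nonzero(d <= r) for r in radii]
    # FD = embedding dimension minus the slope of log V(r) vs log r
    slope, _ = np.polyfit(np.log(radii), np.log(volumes), 1)
    return points.shape[1] - slope
```

For a straight line of pixels the estimate approaches 1, as expected for a smooth curve; rougher structures dilate more slowly and yield higher dimensions.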
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
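The image foresting transform (IFT) at the core of the method computes optimum-path forests by Dijkstra-like propagation from seed pixels; the riverbed connectivity function itself is not given in this abstract, so the sketch below uses the classic f_max path cost only to illustrate the IFT machinery (the grid, weights and seeds are invented for the example):

```python
import heapq

def ift_fmax(weights, seeds):
    """Generic image foresting transform on a 4-connected grid with the
    classic f_max path cost (cost of a path = its largest node weight).
    weights[y][x] is the node weight; seeds is a list of (y, x) roots.
    (Riverbed uses a different, specialised connectivity function.)"""
    h, w = len(weights), len(weights[0])
    cost = [[float('inf')] * w for _ in range(h)]
    root = [[None] * w for _ in range(h)]
    pq = []
    for s in seeds:
        cost[s[0]][s[1]] = 0
        root[s[0]][s[1]] = s
        heapq.heappush(pq, (0, s))
    while pq:
        c, (y, x) = heapq.heappop(pq)
        if c > cost[y][x]:          # stale queue entry
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nc = max(c, weights[ny][nx])
                if nc < cost[ny][nx]:
                    cost[ny][nx] = nc
                    root[ny][nx] = root[y][x]
                    heapq.heappush(pq, (nc, (ny, nx)))
    return cost, root
```

Each pixel ends up assigned to the seed that reaches it with the cheapest (here, minimax) path, which is how seed competition produces a segmentation.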
Abstract:
This work proposes the development and study of a novel technique for the generation of fractal descriptors used in texture analysis. The novel descriptors are obtained from a multiscale transform applied to the Fourier technique of fractal dimension calculation. The power spectrum of the Fourier transform of the image is plotted against the frequency on a log-log scale, and a multiscale transform is applied to this curve. The obtained values are taken as the fractal descriptors of the image. The proposal is validated by using the descriptors for the classification of a dataset of texture images whose real classes are known in advance. The classification precision is compared to that of other fractal descriptors known in the literature. The results confirm the efficiency of the proposed method. (C) 2012 Elsevier B.V. All rights reserved.
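The core measurement behind the Fourier technique can be sketched as follows: take the 2-D Fourier power spectrum, average it over rings of equal frequency, and fit a line to the log-log plot. How the slope is mapped to a fractal dimension (and how the multiscale transform is applied on top of the curve) varies in the literature, so this sketch, under those assumptions, returns only the raw slope:

```python
import numpy as np

def fourier_spectrum_slope(img):
    """Slope of the radially averaged log-log Fourier power spectrum.
    Fractal-dimension formulas map this slope to a dimension estimate;
    here only the raw slope is returned."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx).astype(int)   # integer radius of each bin
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    freqs = np.arange(1, min(cy, cx))            # skip DC, stay inside the image
    radial = sums[freqs] / counts[freqs]         # ring-averaged power
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    return slope
```

White noise has an approximately flat spectrum, so its fitted slope is close to zero, while fractal (1/f-like) textures yield increasingly negative slopes.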
Abstract:
The present work introduces a novel fractal dimension method for shape analysis. The proposed technique extracts descriptors from a shape by applying a multiscale approach to the calculation of the fractal dimension. The fractal dimension is estimated by applying the curvature scale-space technique to the original shape. By applying a multiscale transform to the calculation, we obtain a set of descriptors capable of describing the shape under investigation with high precision. We validate the computed descriptors in a classification process. The results demonstrate that the novel technique provides highly reliable descriptors, confirming the efficiency of the proposed method. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4757226]
Abstract:
Aims. We studied four young star clusters to characterise their anomalous extinction or variable reddening and to assess whether these could be due to contamination by either dense clouds or circumstellar effects. Methods. We evaluated the extinction law (R-V) by adopting two methods: (i) the use of theoretical expressions based on the colour excess of stars with known spectral type; and (ii) the analysis of two-colour diagrams, in which the slope of the observed colour distribution was compared to the normal distribution. An algorithm to reproduce the reddened colours of the zero-age main sequence (ZAMS) was developed to derive the average visual extinction (A(V)) that provides the closest fit to the observational data. The structure of the clouds was evaluated by means of a statistical fractal analysis designed to compare their geometric structure with the spatial distribution of the cluster members. Results. The cluster NGC 6530 is the only object of our sample affected by anomalous extinction. On average, the other clusters suffer normal extinction, but several of their members, mainly in NGC 2264, seem to have high R-V, probably because of circumstellar effects. The ZAMS fitting provides A(V) values that are in good agreement with those found in the literature. The fractal analysis shows that NGC 6530 has a centrally concentrated distribution of stars that differs from the substructures found in the density distribution of the cloud projected in the A(V) map, suggesting that the original cloud was changed by the cluster formation. However, the fractal dimension and statistical parameters of Berkeley 86, NGC 2244 and NGC 2264 indicate a good cloud-cluster correlation when compared to other works based on an artificial distribution of points.
Abstract:
This paper is dedicated to estimating the fractal dimension of exponential global attractors of some generalized gradient-like semigroups in a general Banach space, in terms of the maximum of the dimensions of the local unstable manifolds of the isolated invariant sets, the Lipschitz properties of the semigroup and the rate of exponential attraction. We also generalize this result to some special evolution processes, introducing a concept of Morse decomposition with pullback attractivity. Under suitable assumptions, if (A, A*) is an attractor-repeller pair for the attractor A of a semigroup {T(t) : t >= 0}, then the fractal dimension of A can be estimated in terms of the fractal dimension of the local unstable manifold of A*, the fractal dimension of A, the Lipschitz properties of the semigroup and the rate of exponential attraction. The ingredients of the proof are the notion of generalized gradient-like semigroups and their regular attractors, Morse decomposition and a fine analysis of the structure of the attractors. As stated previously, we generalize this result to some evolution processes using the same basic ideas. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M=|C|+|Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M)=O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ||F_P||_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ||F_P||_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ||F_P||_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ||F_P||_q, q ∈ [1, ∞), is identical to that for ||F_P||_1 when the original weight function w is replaced by w^q.
Thus, any algorithm GC(sum) solving the ||F_P||_1 minimization problem also solves the one for ||F_P||_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ||F_P||_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ||F_P||_q-minimization problems converge to a solution of the ||F_P||_∞-minimization problem (the identity ||F_P||_∞ = lim_{q→∞} ||F_P||_q alone is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. It concentrates on the algorithms' actual running times (as opposed to the provable worst-case scenario), as well as on the influence of the choice of seeds on the output.
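The reduction described above says that minimizing the finite-q energy only requires raising each weight to the q-th power before running any min-cut solver. A toy illustration of that reduction, using a from-scratch Edmonds-Karp max-flow as the min-cut solver (the graph and weights are invented for the example; this is not the paper's GC(sum) implementation):

```python
from collections import deque

def min_cut_value(n, edges, s, t):
    """Max-flow / min-cut (Edmonds-Karp) on a small directed graph.
    edges: list of (u, v, capacity); returns the min s-t cut value,
    which equals the max flow by the max-flow min-cut theorem."""
    cap = [[0.0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c                      # parallel edges add up
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                 # no path left: flow = min cut
            return flow
        v, b = t, float('inf')              # bottleneck along the path
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                       # push flow, update residuals
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b
```

On this graph the cheapest cut under the raw weights (value 4, the single heavy edge) differs from the cheapest cut under the squared weights (value 12.5, the two light edges), which is why each exponent q defines a genuinely different segmentation energy.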
Abstract:
Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is to overcome the difficulties posed by problem situations (e.g., illumination changes, distracting events such as elements moving in the background, and camera shake) that may occur while the video is being captured. This paper presents a survey of segmentation methods for background substitution applications, describes the main concepts involved, and identifies events that may cause errors. Our analysis shows that the most robust methods rely on specific devices (multiple cameras or sensors that generate depth maps) to aid the process. To achieve similar results using conventional devices (monocular video cameras), most current research relies on energy minimization frameworks, in which temporal and spatial information is probabilistically combined with color and contrast information.
Abstract:
Texture image analysis is an important field of investigation that has attracted the attention of the computer vision community in recent decades. In this paper, a novel approach for texture image analysis is proposed, based on a combination of graph theory and partially self-avoiding deterministic walks. From the image, we build a regular graph in which each vertex represents a pixel and is connected to its neighboring pixels (pixels whose spatial distance is less than a given radius). Transformations on the regular graph are applied to emphasize different image features. To characterize the transformed graphs, partially self-avoiding deterministic walks are performed to compose the feature vector. Experimental results on three databases indicate that the proposed method significantly improves the correct classification rate compared to the state of the art, e.g. from 89.37% (original tourist walk) to 94.32% on the Brodatz database, from 84.86% (Gabor filter) to 85.07% on the Vistex database and from 92.60% (original tourist walk) to 98.00% on the plant leaves database. In view of these results, it is expected that this method could provide good results in other applications such as texture synthesis and texture segmentation. (C) 2012 Elsevier Ltd. All rights reserved.
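A partially self-avoiding deterministic walk (tourist walk) visits, at every step, the nearest site not among the mu most recently visited ones; the walk ends in a cycle (attractor) whose statistics feed the feature vector. A minimal point-set sketch under that common definition (the paper additionally runs the walks on transformed pixel graphs, which is not reproduced here):

```python
import numpy as np

def tourist_walk(points, start, mu, max_steps=1000):
    """Partially self-avoiding deterministic walk: from the current
    point, always move to the nearest point not visited in the last
    `mu` steps; stop when a (next, memory-window) state repeats,
    i.e. when the walk has entered its attractor cycle."""
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    traj = [int(start)]
    seen_states = set()
    cur = int(start)
    for _ in range(max_steps):
        forbidden = set(traj[-mu:])          # memory window of mu sites
        order = np.argsort(d[cur])           # neighbours by distance
        nxt = int(next(i for i in order if i != cur and i not in forbidden))
        state = (nxt,) + tuple(traj[-mu:])
        if state in seen_states:             # attractor detected
            break
        seen_states.add(state)
        traj.append(nxt)
        cur = nxt
    return traj
```

With memory mu = 1 and two mutual nearest neighbours, the walker is immediately trapped in a period-2 attractor.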
Abstract:
Fractal theory has a large number of applications to image and signal analysis. Although the fractal dimension can be used as an image object descriptor, a multiscale approach, such as the multiscale fractal dimension (MFD), increases the amount of information extracted from an object. MFD provides a curve that describes object complexity along the scale. However, this curve carries much redundant information, which could be discarded without loss in performance. It is therefore necessary to use a descriptor technique to analyze this curve and to reduce the dimensionality of the data by selecting its meaningful descriptors. This paper shows a comparative study of different techniques for MFD descriptor generation. It compares the use of well-known and state-of-the-art descriptors, such as Fourier, Wavelet, Polynomial Approximation (PA), Functional Data Analysis (FDA), Principal Component Analysis (PCA), Symbolic Aggregate Approximation (SAX), kernel PCA, Independent Component Analysis (ICA), and geometrical and statistical features. The descriptors are evaluated in a classification experiment using Linear Discriminant Analysis over the descriptors computed from MFD curves from two data sets: generic shapes and rotated fish contours. Results indicate that PCA, FDA, PA and Wavelet Approximation provide the best MFD descriptors for recognition and classification tasks. (C) 2012 Elsevier B.V. All rights reserved.
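Among the compared techniques, PCA is perhaps the simplest to sketch: the MFD curves are centred and projected onto their leading principal axes, so a long redundant curve collapses to a few coordinates. A minimal sketch with invented curves (not the paper's data or its full evaluation pipeline):

```python
import numpy as np

def pca_descriptors(curves, k):
    """Reduce a set of multiscale fractal dimension (MFD) curves to k
    PCA descriptors. curves: (n_samples, n_scales) array."""
    X = curves - curves.mean(axis=0)          # centre each scale
    # principal axes from the SVD of the centred data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:k].T                       # project onto the top k axes
```

Because the synthetic curves in the test differ only by a scale factor, a single principal component captures all of their variance and the second descriptor is numerically zero.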
Abstract:
Background: Atherosclerosis causes millions of deaths annually and yields billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high-resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving accuracy and usefulness. Method: An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu's threshold; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results: The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards produced by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions: By segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation.
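The adapted version of Otsu's threshold used in the feature extraction block is not detailed in this abstract; the classic Otsu method it builds on picks the grey level that maximises the between-class variance of the histogram, and can be sketched as follows (the 256-bin histogram and the bimodal test data are assumptions of the example):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Classic Otsu threshold: pick the level that maximises the
    between-class variance of the grey-level histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2      # bin centres
    w0 = np.cumsum(p)                          # class-0 probability
    m = np.cumsum(p * levels)                  # cumulative mean
    mt = m[-1]                                 # global mean
    # between-class variance for every candidate split
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)           # empty classes score 0
    return levels[np.argmax(sigma_b)]
```

On clearly bimodal data the chosen level falls in the valley between the two modes.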
Abstract:
Background: Prostate cancer is a serious public health problem that affects quality of life and has a significant mortality rate. The aim of the present study was to quantify the fractal dimension and Shannon's entropy in the histological diagnosis of prostate cancer. Methods: Thirty-four patients with prostate cancer aged 50 to 75 years who had undergone radical prostatectomy participated in the study. Histological slides of normal (N), hyperplastic (H) and tumor (T) areas of the prostate were digitally photographed at three different magnifications (40x, 100x and 400x) and analyzed. The fractal dimension (FD), Shannon's entropy (SE) and number of cell nuclei (NCN) in these areas were compared. Results: FD analysis demonstrated the following significant differences between groups: T vs. N and H vs. N (p < 0.05) at a magnification of 40x; T vs. N (p < 0.01) at 100x; and H vs. N (p < 0.01) at 400x. SE analysis revealed the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 100x; and T vs. H and T vs. N (p < 0.001) at 400x. NCN analysis demonstrated the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 40x; T vs. H and T vs. N (p < 0.0001) at 100x; and T vs. H and T vs. N (p < 0.01) at 400x. Conclusions: The quantification of FD and SE, together with the number of cell nuclei, has potential clinical applications in the histological diagnosis of prostate cancer.
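Shannon's entropy, one of the two measures quantified here, reduces to a few lines when computed over the grey-level histogram of an image; the 256-bin histogram below is an assumption of the sketch, not necessarily the study's exact protocol:

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (in bits) of the grey-level histogram of an image."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A constant image has zero entropy; an image split evenly between two grey levels has exactly one bit.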
Abstract:
OBJECTIVE: To propose an automatic brain tumor segmentation system. METHODS: The system used texture characteristics as its main source of information for segmentation. RESULTS: The mean correct match was 94% correspondence between the segmented areas and the ground truth. CONCLUSION: The final results showed that the proposed system was able to find and delimit tumor areas without requiring any user interaction.
Abstract:
The parenchymal distribution of the splenic artery was studied in order to obtain an anatomical basis for partial splenectomy. Thirty-two spleens were studied: 26 spleens of healthy horses weighing 320 to 450 kg, aged 3 to 12 years, and 6 spleens of fetuses obtained from a slaughterhouse. The spleens were submitted to arteriography and scintigraphy so that their vascular pattern could be examined and compared with the external aspect of the organ, aiming to establish anatomo-surgical segments. All radiographs were photographed with a digital camera and the digital images were submitted to a measuring system for comparative analysis of the areas of the dorsal and ventral anatomo-surgical segments. Anatomical investigation of the angioarchitecture of the equine spleen showed a paucivascular area, which coincides with a thinner external area, allowing the organ to be divided into two anatomo-surgical segments of approximately 50% of the organ each.
Abstract:
This work presents a methodology for the morphology analysis and characterization of nanostructured material images acquired with the FEG-SEM (Field Emission Gun Scanning Electron Microscopy) technique. The metrics were extracted from the image texture (treated as a mathematical surface) by the volumetric fractal descriptors, a methodology based on the Bouligand-Minkowski fractal dimension, which considers the properties of the Minkowski dilation of the surface points. An experiment with galvanostatic anodic titanium oxide samples prepared in oxalic acid solution under different conditions of applied current, oxalic acid concentration and solution temperature was performed. The results demonstrate that the approach is capable of characterizing complex morphological features such as those present in anodic titanium oxide.
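Under the volumetric-descriptor view, the grey-level image is lifted to a 3-D surface (pixel position plus intensity as height) and the whole curve of dilation volumes, rather than a single fitted dimension, serves as the feature vector. A small brute-force sketch under that reading (the grid sizes, radii and flat test image are assumptions for illustration, not the authors' parameters):

```python
import numpy as np

def volumetric_descriptors(img, radii, depth):
    """Volumetric Bouligand-Minkowski descriptors of a grey-level image:
    map each pixel (y, x) with intensity z to a 3-D point (y, x, z) and
    record the dilation volume V(r) for each radius r. The vector of
    log-volumes (not just a fitted slope) is the descriptor."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel(), img.ravel()], axis=1).astype(float)
    zz, yy, xx = np.mgrid[0:depth, 0:h, 0:w]
    grid = np.stack([yy.ravel(), xx.ravel(), zz.ravel()], axis=1).astype(float)
    # distance from every voxel to the nearest surface point
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.log([np.count_nonzero(d <= r) for r in radii])
```

For a perfectly flat image the surface is a plane, so V(r) grows linearly with r; textured surfaces fill the dilation space differently at each radius, and those differences are what the descriptor vector captures.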