701 results for NIH Image macros
Abstract:
Multidimensional visualization techniques are invaluable tools for the analysis of structured and unstructured data with variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting the analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
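The abstract does not name the specific projection technique, so as one hedged illustration of similarity-based point placement on a plane, the sketch below uses classical multidimensional scaling (MDS) on a pairwise-distance matrix of hypothetical image feature vectors.

```python
# Minimal classical MDS sketch: place items on a plane from pairwise distances.
import numpy as np

def classical_mds(D, dim=2):
    """Project items onto `dim` coordinates from an n x n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # keep the largest components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# toy usage: distances between 5 hypothetical image feature vectors
feats = np.random.rand(5, 16)
D = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
xy = classical_mds(D)            # 5 x 2 coordinates for a 2D scatter-plot layout
```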
Abstract:
Successful classification, information retrieval and image analysis tools are intimately related to the quality of the features employed in the process. Pixel intensities, color, texture and shape are generally the basis from which most features are computed and used in these fields. This paper presents a novel shape-based feature extraction approach in which an image is decomposed into multiple contours and further characterized by Fourier descriptors. Unlike traditional approaches, we make use of topological knowledge to generate well-defined closed contours, which are efficient signatures for image retrieval. The method has been evaluated in the contexts of CBIR and image analysis. The results have shown that the multi-contour decomposition, as opposed to a single-contour representation, introduces a significant improvement in discrimination power. (c) 2008 Elsevier B.V. All rights reserved.
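As a minimal illustration (not the paper's pipeline), Fourier descriptors for one closed contour can be computed as below; the paper applies such descriptors to several contours obtained from its topology-aware decomposition.

```python
# Fourier descriptors of a closed contour with translation/scale/rotation invariance.
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """contour: (N, 2) array of (x, y) points along a closed boundary."""
    z = contour[:, 0] + 1j * contour[:, 1]       # encode points as complex numbers
    F = np.fft.fft(z)
    F[0] = 0.0                                   # drop DC term -> translation invariance
    F = F / np.abs(F[1])                         # normalize by first harmonic -> scale invariance
    return np.abs(F[1:n_coeffs + 1])             # magnitudes -> rotation/start-point invariance

# toy usage: a noisy circle
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(200, 2)
desc = fourier_descriptors(circle)
```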
Abstract:
Traditional content-based image retrieval (CBIR) systems use low-level features such as the colors, shapes, and textures of images. Users, however, make queries based on semantics, which are not easily related to such low-level characteristics. Recent work on CBIR confirms that researchers have been trying to map visual low-level characteristics to high-level semantics. The relation between low-level characteristics and image textual information motivates this article, which proposes a model for the automatic classification and categorization of words associated with images. The proposal considers a self-organizing neural network architecture, which classifies textual information without previous learning. Experimental results compare the performance of the text-based approach with that of an image retrieval system based on low-level features. (c) 2008 Wiley Periodicals, Inc.
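A minimal self-organizing map (SOM) sketch in the spirit of the architecture described above, clustering word feature vectors without labeled data; the grid size, learning schedule and the random word vectors are illustrative assumptions, not the article's configuration.

```python
# Minimal SOM training loop over word feature vectors.
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    gy, gx = np.mgrid[0:h, 0:w]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5      # shrinking neighborhood
        for x in rng.permutation(data):
            dist = np.linalg.norm(weights - x, axis=-1)
            by, bx = np.unravel_index(np.argmin(dist), (h, w))   # best-matching unit
            g = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)         # pull neighborhood toward x
    return weights

# toy usage: 100 hypothetical word vectors (e.g. tf-idf rows) of dimension 32
words = np.random.rand(100, 32)
som = train_som(words)           # each word is then mapped to its best-matching unit
```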
Abstract:
Texture is one of the most important visual attributes used in image analysis. It is used in many content-based image retrieval systems, where it allows the identification of a larger number of images from distinct origins. This paper presents a novel approach to image analysis and retrieval based on complexity analysis. The approach consists of a texture segmentation step, performed by complexity analysis through the box-counting fractal dimension, followed by the estimation of the complexity of each computed region by the multiscale fractal dimension. Experiments have been performed with an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the method and also indicate how its performance changes as the texture segmentation process is altered.
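The box-counting fractal dimension used in the segmentation step can be sketched as follows for a binary image (a minimal illustration; the paper's multiscale fractal dimension estimation is not reproduced).

```python
# Box-counting fractal dimension: slope of log(box count) vs log(1/box size).
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = binary_img.shape
        trimmed = binary_img[:h - h % s, :w - w % s]        # tile exactly into s x s boxes
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())          # boxes containing foreground
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# toy usage: a random binary texture patch
patch = np.random.rand(128, 128) > 0.7
print(box_counting_dimension(patch))
```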
Abstract:
This paper presents the use of a multiprocessor architecture to improve the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is a computationally intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed on both platforms increased with image resolution. In addition, the execution-time to communication-time ratio (Rt/Rc) as a function of sample size varied much less on the DSP platform than on the MPI platform, which indicates its better suitability for parallel image reconstruction.
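As one illustration of the MPI-based platform mentioned above (not the authors' DSP or MPI code), the sketch below splits projection angles across mpi4py ranks, performs a toy unfiltered backprojection on each rank, and measures the execution-to-communication time ratio (Rt/Rc); the sinogram here is a random placeholder.

```python
# Toy parallel backprojection with mpi4py; run with: mpiexec -n 4 python this_script.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N, n_angles = 256, 180                       # image size and number of projections
angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
my_angles = angles[rank::size]               # round-robin split of projection angles
sino = np.random.rand(len(my_angles), N)     # placeholder sinogram rows for this rank

t0 = MPI.Wtime()
partial = np.zeros((N, N))
ys, xs = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
for row, theta in zip(sino, my_angles):
    # unfiltered backprojection of one row: smear it along the ray direction
    s = xs * np.cos(theta) + ys * np.sin(theta)              # detector coordinate per pixel
    idx = np.clip(((s + 1) / 2 * (N - 1)).astype(int), 0, N - 1)
    partial += row[idx]
t_exec = MPI.Wtime() - t0

t0 = MPI.Wtime()
image = comm.reduce(partial, op=MPI.SUM, root=0)             # combine partial reconstructions
t_comm = MPI.Wtime() - t0

if rank == 0:
    print(f"Rt/Rc (execution/communication time ratio): {t_exec / max(t_comm, 1e-9):.1f}")
```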
Abstract:
Although the oral cavity is easily accessible to inspection, patients with oral cancer most often present at a late stage, leading to high morbidity and mortality. Autofluorescence imaging has emerged as a promising technology to aid clinicians in screening for oral neoplasia and as an aid to resection, but current approaches rely on subjective interpretation. We present a new method to objectively delineate neoplastic oral mucosa using autofluorescence imaging. Autofluorescence images were obtained from 56 patients with oral lesions and 11 normal volunteers. From these images, 276 measurements from 159 unique regions of interest (ROI) sites corresponding to normal and confirmed neoplastic areas were identified. Data from ROIs in the first 46 subjects were used to develop a simple classification algorithm based on the ratio of red-to-green fluorescence; performance of this algorithm was then validated using data from the ROIs in the last 21 subjects. This algorithm was applied to patient images to create visual disease probability maps across the field of view. Histologic sections of resected tissue were used to validate the disease probability maps. The best discrimination between neoplastic and nonneoplastic areas was obtained at 405 nm excitation; normal tissue could be discriminated from dysplasia and invasive cancer with a 95.9% sensitivity and 96.2% specificity in the training set, and with a 100% sensitivity and 91.4% specificity in the validation set. Disease probability maps qualitatively agreed with both clinical impression and histology. Autofluorescence imaging coupled with objective image analysis provided a sensitive and noninvasive tool for the detection of oral neoplasia.
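The red-to-green fluorescence ratio classifier described above can be illustrated with a minimal sketch; the threshold used below is an arbitrary placeholder, not the cutoff trained and validated in the study.

```python
# R/G ratio map with a simple threshold to flag suspected neoplastic regions.
import numpy as np

def disease_probability_map(rgb, threshold=0.9):
    """rgb: (H, W, 3) float autofluorescence image; returns the R/G ratio and a binary mask."""
    red = rgb[..., 0].astype(float)
    green = rgb[..., 1].astype(float) + 1e-6     # avoid division by zero
    ratio = red / green                          # higher ratio -> loss of green autofluorescence
    return ratio, ratio > threshold

# toy usage
img = np.random.rand(64, 64, 3)
ratio, mask = disease_probability_map(img)
```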
Abstract:
This paper presents a study of AISI 1040 steel corrosion in an aqueous acetic acid buffer electrolyte containing 3.1 × 10⁻³ and 31 × 10⁻³ mol dm⁻³ of Na₂S, in both the presence and absence of 3.5 wt.% NaCl. This investigation of steel corrosion was carried out using potential polarization, open-circuit measurements and in situ optical microscopy. The morphological analysis and classification of the types of surface corrosion damage by digital image processing reveal grain-boundary corrosion and show non-uniform sulfide film growth, which occurs preferentially over pearlitic grains through successive formation and dissolution of the film. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
An entropy-based image segmentation approach is introduced and applied to color images obtained from Google Earth. Segmentation refers to the process of partitioning a digital image in order to locate different objects and regions of interest. The application to satellite images paves the way for the automated monitoring of ecological catastrophes, urban growth, agricultural activity, maritime pollution, climate change and general surveillance. Regions representing aquatic, rural and urban areas are identified, and the accuracy of the proposed segmentation methodology is evaluated. A comparison with gray-level images reveals that color information is fundamental to obtaining an accurate segmentation. (C) 2010 Elsevier B.V. All rights reserved.
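A minimal sketch of the entropy idea behind the segmentation described above: Shannon entropy is computed per image block and thresholded. Block size, histogram bins and threshold are illustrative choices, not the paper's formulation.

```python
# Block-wise Shannon entropy followed by a crude threshold.
import numpy as np

def block_entropy(gray, block=16, bins=32):
    """gray: 2D array scaled to [0, 1]; returns one entropy value per block."""
    h, w = gray.shape
    gray = gray[:h - h % block, :w - w % block]
    out = np.zeros((gray.shape[0] // block, gray.shape[1] // block))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = gray[i * block:(i + 1) * block, j * block:(j + 1) * block]
            p, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = p / p.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()     # Shannon entropy of the block
    return out

# toy usage: textured (urban-like) blocks have higher entropy than flat (aquatic-like) ones
img = np.random.rand(256, 256)
labels = block_entropy(img) > 3.0
```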
Abstract:
In this paper we present a novel approach to multispectral image contextual classification that combines iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach that combines two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, which regularizes the solution in the presence of noisy data. Hence, the classification problem is stated within a Maximum a Posteriori (MAP) framework. To approximate the MAP solution we apply several combinatorial optimization methods with multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison with Simulated Annealing, which is often unfeasible in many real image processing applications. The Markov Random Field model parameters are estimated by a Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology. (C) 2010 Elsevier B.V. All rights reserved.
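One of the sub-optimal combinatorial optimizers that can be combined as described above is Iterated Conditional Modes (ICM); the sketch below minimizes a Gaussian-likelihood plus Potts-prior energy, with class parameters and the regularization weight beta fixed by hand rather than estimated by maximum pseudo-likelihood as in the paper.

```python
# ICM for MAP labeling with a Gaussian likelihood and a Potts prior on a 4-neighborhood.
import numpy as np

def icm(y, means, variances, beta=1.0, n_iter=5):
    """y: 2D observed image; returns an integer label map that locally minimizes the MAP energy."""
    means, variances = np.asarray(means, float), np.asarray(variances, float)
    labels = np.abs(y[..., None] - means).argmin(-1)           # maximum-likelihood initialization
    h, w = y.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                # -log Gaussian likelihood for every class at this pixel
                energy = 0.5 * np.log(variances) + (y[i, j] - means) ** 2 / (2 * variances)
                # Potts penalty: count 4-neighbors disagreeing with each candidate class
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < h and 0 <= nj < w:
                        energy += beta * (np.arange(len(means)) != labels[ni, nj])
                labels[i, j] = energy.argmin()
    return labels

# toy usage: two-class noisy image
truth = np.zeros((40, 40)); truth[:, 20:] = 1
y = truth + 0.4 * np.random.randn(40, 40)
est = icm(y, means=[0.0, 1.0], variances=[0.16, 0.16])
```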
Abstract:
Purpose: We present an iterative framework for CT reconstruction from transmission ultrasound data that accurately and efficiently models the strong refraction effects occurring in our target application: imaging the female breast. Methods: Our refractive ray-tracing framework has its foundation in the fast marching method (FMM), which allows accurate as well as efficient modeling of curved rays. We also describe a novel regularization scheme that yields further significant improvements in reconstruction quality. A final contribution is the development of a realistic anthropomorphic digital breast phantom based on the NIH Visible Female data set. Results: Our system is able to resolve very fine details even in the presence of significant noise, and it reconstructs both sound-speed and attenuation data. Excellent correspondence with a traditional, but significantly more computationally expensive, wave-equation solver is achieved. Conclusions: Apart from the accurate modeling of curved rays, decisive factors have also been our regularization scheme and the high-quality interpolation filter we have used. An added benefit of our framework is that it accelerates well on GPUs, where we have shown that clinical 3D reconstruction speeds on the order of minutes are possible.
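A minimal first-order fast marching sketch on a 2D grid with a spatially varying sound-speed map, as one illustration of the eikonal travel-time computation that underlies the refractive ray tracing described above; it is not the authors' GPU implementation, and the speed map below is a toy example.

```python
# First-order fast marching: solve |grad T| = 1/speed on a unit-spaced 2D grid.
import heapq
import numpy as np

def fast_marching(speed, src):
    T = np.full(speed.shape, np.inf)
    T[src] = 0.0
    frozen = np.zeros(speed.shape, dtype=bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]) or frozen[ni, nj]:
                continue
            # upwind neighbor values along each axis
            a = min(T[ni - 1, nj] if ni > 0 else np.inf,
                    T[ni + 1, nj] if ni < speed.shape[0] - 1 else np.inf)
            b = min(T[ni, nj - 1] if nj > 0 else np.inf,
                    T[ni, nj + 1] if nj < speed.shape[1] - 1 else np.inf)
            f = 1.0 / speed[ni, nj]
            if abs(a - b) < f:                      # quadratic update uses both axes
                t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
            else:                                   # causality: fall back to the 1D update
                t_new = min(a, b) + f
            if t_new < T[ni, nj]:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, (ni, nj)))
    return T

# toy usage: travel times from the top-left corner through a two-speed medium
speed = np.ones((64, 64)); speed[:, 32:] = 1.5
T = fast_marching(speed, (0, 0))
```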
Abstract:
The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for applying stacked design, a recently proposed technique for creating two-stage operators that circumvents this difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined into a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design. (C) 2009 Elsevier B.V. All rights reserved.
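The information-theoretic grouping criterion named above can be illustrated by scoring a candidate group of window pixels with the mutual information between their joint binary pattern and the desired output value (a minimal sketch, not the paper's full stacked-design algorithm).

```python
# Mutual information between the joint pattern of a pixel group and the output value.
import numpy as np

def group_mutual_information(patterns, outputs):
    """patterns: (n_samples, k) binary values of a pixel group; outputs: (n_samples,) binary."""
    # encode each k-pixel pattern as a single integer symbol
    symbols = (patterns.astype(int) * (2 ** np.arange(patterns.shape[1]))).sum(axis=1)
    joint = np.zeros((2 ** patterns.shape[1], 2))
    for s, o in zip(symbols, outputs.astype(int)):
        joint[s, o] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum()

# toy usage: the output copies pixel 0, so the group {0, 1} is highly informative
X = np.random.rand(1000, 2) > 0.5
print(group_mutual_information(X, X[:, 0]))
```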
Abstract:
Modern medical imaging techniques enable the acquisition of in vivo high-resolution images of the vascular system. The most common methods for detecting vessels in these images, such as multiscale Hessian-based operators and matched filters, rely on the assumption that at each voxel there is a single cylinder. Such an assumption is clearly violated at the multitude of branching points that are easily observed in all but the most focused vascular image studies. In this paper, we propose a novel method for detecting vessels in medical images that relaxes this single-cylinder assumption. We directly exploit local neighborhood intensities and extract characteristics of the local intensity profile (in a spherical polar coordinate system), which we term the polar neighborhood intensity profile. We present a new method to capture the common properties shared by the polar neighborhood intensity profiles of all types of vascular points in the vascular system. The new method enables us to detect vessels even near complex extreme points, including branching points. Our method demonstrates improved performance over standard methods on both 2D synthetic images and 3D animal and clinical vascular images, particularly close to vessel branching regions. (C) 2008 Elsevier B.V. All rights reserved.
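A minimal sketch of sampling a polar neighborhood intensity profile around a voxel, i.e. reading interpolated intensities along rays in spherical polar directions; the direction sampling and radii below are illustrative assumptions, and the descriptor the paper builds on top of these profiles is not reproduced.

```python
# Sample intensities along spherical polar rays around a center voxel.
import numpy as np
from scipy.ndimage import map_coordinates

def polar_profile(volume, center, n_theta=8, n_phi=16, radii=np.arange(1, 6)):
    """Returns an (n_theta, n_phi, len(radii)) array of interpolated intensities."""
    theta = np.linspace(0, np.pi, n_theta)                # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    profile = np.zeros((n_theta, n_phi, len(radii)))
    cz, cy, cx = center
    for i, t in enumerate(theta):
        for j, p in enumerate(phi):
            dz, dy, dx = np.cos(t), np.sin(t) * np.cos(p), np.sin(t) * np.sin(p)
            coords = np.array([cz + radii * dz, cy + radii * dy, cx + radii * dx])
            profile[i, j] = map_coordinates(volume, coords, order=1, mode='nearest')
    return profile

# toy usage: a bright axis-aligned "vessel" in a synthetic volume
vol = np.zeros((32, 32, 32)); vol[:, 15:17, 15:17] = 1.0
prof = polar_profile(vol, center=(16, 16, 16))
```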
Abstract:
The objective of this thesis has been to investigate the approval process for an image. This investigation has been carried out at four catalog-producing companies and three companies working with repro or printing. The information was gathered through interviews and surveys and later used for evaluation. The evaluation has shown that all of the businesses are very good at the technical aspects, but also that their biggest problem is communication. The conclusion is that businesses need a clear structure for the image approval process. This will minimize the communication problems and make the process more effective.
Abstract:
A study of the Black man in postcolonial literature: how does this man see himself in the novel "La rue Cases-Nègres"?