95 results for Biomedical Image Processing
Abstract:
Digital techniques have been developed and validated to semiquantitatively assess immunohistochemical nuclear staining. Currently, visual classification is the standard for qualitative nuclear evaluation. Analysis of the pixels that represent the immunohistochemical labeling can be more sensitive, reproducible and objective than visual grading. This study compared two semiquantitative digital image analysis techniques with three visual image analysis techniques for estimating p53 nuclear immunostaining. Methods: Sixty-three sun-exposed forearm-skin biopsies were photographed and submitted to three visual image analyses: the qualitative visual evaluation method (0 to 4+), the percentage of labeled nuclei, and the HSCORE. Digital image analysis was performed using ImageJ 1.45p; the density of nuclei was scored per epithelial area (DensNU), and the pixel density was established in the marked suprabasal epithelium (DensPSB). Results: Statistical significance was found in the agreement and correlation among the evaluators' visual estimates, and in the correlation of the evaluators' median visual score, the HSCORE and the percentage of marked nuclei with the DensNU and DensPSB estimates. DensNU was strongly correlated with the percentage of p53-marked nuclei in the epidermis, and DensPSB with the HSCORE. Conclusion: The parameters presented herein can be applied in routine analysis of immunohistochemical nuclear staining of the epidermis. © 2012 John Wiley & Sons A/S.
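The DensNU score (nuclei per epithelial area) was computed with ImageJ 1.45p. Purely as an illustration of the idea, the sketch below approximates such a score in Python with scikit-image rather than the authors' ImageJ workflow; the color-deconvolution step, the Otsu threshold and the precomputed epithelium mask are all assumptions.

```python
from skimage import color, filters, measure

def dens_nu(rgb_image, epithelium_mask):
    """Count DAB-positive nuclei per epithelial pixel (a DensNU-like score)."""
    hed = color.rgb2hed(rgb_image)        # color deconvolution: H, E, DAB
    dab = hed[:, :, 2]                    # DAB channel carries the p53 label
    positive = (dab > filters.threshold_otsu(dab)) & epithelium_mask
    nuclei = measure.label(positive)      # connected components = nuclei
    return nuclei.max() / epithelium_mask.sum()
```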
Abstract:
The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols (outline only and all-boundary lines). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, in which linear measurements between anatomical landmarks were obtained and compared at an error probability of 5%. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). When designing a virtual 3D reconstruction, both the outline-only and the all-boundary-lines segmentation protocols can therefore be used. Virtual processing of CT images is the most complex stage in the manufacture of a biomodel. Establishing a better protocol for this phase allows the construction of a biomodel whose characteristics are closer to the original anatomical structures, which is essential to ensure correct preoperative planning and suitable treatment.
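The segmentation protocols compared in the study are interactive contouring protocols in dedicated biomodeling software, which the abstract does not detail. The sketch below only illustrates the generic pipeline behind such measurements, with a fixed bone threshold and voxel-index landmarks standing in as assumptions: segment bone from the CT volume, reconstruct a surface, and take a linear measurement between two landmarks.

```python
import numpy as np
from skimage import measure

def bone_surface(ct_volume, spacing, bone_hu=300):
    """Marching-cubes surface mesh of a thresholded bone mask."""
    verts, faces, _, _ = measure.marching_cubes(
        (ct_volume > bone_hu).astype(float), level=0.5, spacing=spacing)
    return verts, faces

def landmark_distance(idx_a, idx_b, spacing):
    """Linear measurement (mm) between two landmarks given as voxel indices."""
    a = np.asarray(idx_a) * np.asarray(spacing)   # voxel index -> mm
    b = np.asarray(idx_b) * np.asarray(spacing)
    return np.linalg.norm(a - b)
```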
Abstract:
The aim of this study was to evaluate the influence of digitization parameters on periapical radiographic image quality with regard to anatomic landmarks. Digitized images (n = 160) were obtained using a flatbed scanner at resolutions of 300, 600 and 2400 dpi. The images digitized at 2400 dpi were downsampled to 300 and 600 dpi before storage. Digitizations were performed with and without black masking, using 8-bit and 16-bit grayscale, and saved in TIFF format. Four anatomic landmarks were classified by two observers (very good, good, moderate, regular, poor) in two random sessions. Intraobserver and interobserver agreement were evaluated by Kappa statistics and varied according to the anatomic landmark and resolution used: concordance ranged from regular to moderate for the intraobserver evaluation and from regular to poor for the interobserver evaluation. The cementoenamel junction was the anatomic landmark with the poorest concordance. The use of black masking provided better results in the digitized image; covering radiographs with a mask during digitization is therefore necessary.
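The downsampling of the 2400 dpi scans to 300 and 600 dpi before storage can be reproduced generically; the sketch below is a hedged example using Pillow, with placeholder file names (the abstract does not name the software used for this step).

```python
from PIL import Image

SRC_DPI = 2400
img = Image.open("radiograph_2400dpi.tif")          # placeholder file name
for target_dpi in (600, 300):
    scale = target_dpi / SRC_DPI
    size = (round(img.width * scale), round(img.height * scale))
    out = img.resize(size, resample=Image.LANCZOS)  # high-quality downsample
    out.save(f"radiograph_{target_dpi}dpi.tif", dpi=(target_dpi, target_dpi))
```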
Abstract:
The human dentition is naturally translucent, opalescent and fluorescent. Differences between the fluorescence of tooth structure and that of restorative materials may result in distinct metameric properties and, consequently, perceptibly disparate esthetic behavior, which impairs the esthetic result of restorations and frustrates both patients and staff. In this study, we evaluated the fluorescence of different composites: Durafill (Du), Charisma (Ch), Venus (Ve), Opallis dentin and enamel (OPD and OPE), Point 4 (P4), Z100 (Z1), Z250 (Z2), Te-Econom (TE), Tetric Ceram (TC) and Brilliant (Br), all in shade A2; Tetric Ceram N in shades A1, A2 and A4 (TN1, TN2, TN4); Four Seasons enamel and dentin in shade A2 (4SE and 4SD); and Empress Direct enamel and dentin in shade A2 (EDE and EDD). Cylindrical specimens were prepared, coded and photographed in a standardized manner with a Canon EOS digital camera (ISO 400, f/2.8 aperture, 1/30 s shutter speed) in a dark environment under UV light (25 W). The images were analyzed with the ScanWhite©-DMC/Darwin systems software. The results showed statistical differences between the groups (p < 0.05), and between these groups and the average fluorescence of the dentition of young (18 to 25 years) and adult (40 to 45 years) subjects taken as controls. It can be concluded that the composites Z100, Z250 (3M ESPE) and Point 4 (Kerr) do not match the fluorescence of the human dentition, and that the fluorescence of the materials was affected by their shade.
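The fluorescence readings were taken with the ScanWhite©-DMC/Darwin systems software, whose internals the abstract does not describe. As a rough stand-in, a specimen's relative fluorescence in such a standardized UV photograph could be quantified as the mean luminance of a region of interest; the circular ROI below is an assumption.

```python
from skimage import io, color, draw

def mean_fluorescence(path, center, radius):
    """Average luminance inside a circular ROI on one specimen photograph."""
    gray = color.rgb2gray(io.imread(path))           # luminance in [0, 1]
    rr, cc = draw.disk(center, radius, shape=gray.shape)
    return gray[rr, cc].mean()
```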
Abstract:
Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as the final partitioning tool for graphs modeled by some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance-learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Adopting the Berkeley Segmentation Data Set and Benchmark as a baseline, our goal is to compare the results obtained by this method with previous work in order to validate its performance.
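A minimal sketch of the pipeline shape, following the standard scikit-image NCut example rather than this work's hierarchical criteria: a watershed over-segmentation models the image as a region adjacency graph, whose similarity weights feed the Normalized Cut. The unsupervised distance-learning reweighting step described above is omitted here.

```python
# Requires scikit-image >= 0.20 (graph module; formerly skimage.future.graph)
from skimage import data, filters, segmentation, color, graph

img = data.coffee()                                   # sample RGB image
gradient = filters.sobel(color.rgb2gray(img))
# Watershed over-segmentation as the graph-modeling step
labels = segmentation.watershed(gradient, markers=400, compactness=0.001)
# Region adjacency graph with similarity weights, then Normalized Cut
rag = graph.rag_mean_color(img, labels, mode='similarity')
segmented = graph.cut_normalized(labels, rag)
```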
Abstract:
Image segmentation is a process frequently used in several different areas, including Cartography. Feature extraction is a very troublesome task, and successful results require more complex techniques and good-quality data. The aim of this paper is to study Digital Image Processing techniques, with emphasis on Mathematical Morphology, applied to Remote Sensing imagery, performing image segmentation with morphological operators, mainly the multi-scale morphological gradient operator. In the segmentation process, pre-processing operators of Mathematical Morphology were used, and the multi-scale gradient was implemented to create one of the images used as the marker image. An orbital image from the Landsat satellite, TM sensor, was used, and the routines were implemented in MATLAB. Tests were carried out to verify the performance of the implemented operators, and the results were analyzed. The extraction of linear features using mathematical morphology techniques can contribute to cartographic applications such as the updating of cartographic products. The best result obtained with morphology was compared against conventional feature-extraction techniques. © Springer-Verlag 2004.
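The paper's implementation is in MATLAB; as an illustrative counterpart, the sketch below computes one common form of multi-scale morphological gradient (an average of gradients taken with structuring elements of increasing radius, each eroded to suppress noise at its scale). The number of scales and the disk-shaped structuring elements are assumptions.

```python
import numpy as np
from skimage.morphology import disk, dilation, erosion

def multiscale_gradient(gray, n_scales=3):
    """Average of eroded morphological gradients at increasing scales."""
    g = gray.astype(float)
    acc = np.zeros_like(g)
    for i in range(1, n_scales + 1):
        se = disk(i)
        grad = dilation(g, se) - erosion(g, se)   # gradient at scale i
        acc += erosion(grad, disk(i - 1))         # damp scale-i noise
    return acc / n_scales
```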
Abstract:
In this paper, we describe how multidimensional wavelet neural networks based on Polynomial Powers of Sigmoid (PPS) can be constructed, trained and applied to image processing tasks. In this sense, a novel and uniform framework for face verification is presented. The framework is based on a family of PPS wavelets, generated from linear combinations of sigmoid functions, and can be considered appearance-based in that features are extracted from the face image. The feature vectors are then subjected to PPS-wavelet subspace projection. The design of PPS-wavelet neural networks, seldom reported in the literature, is also discussed. The Stirling University face database was used to generate the results. Our method achieved a 92% correct detection rate and a 5% false detection rate on this database.
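The abstract does not give the PPS coefficients, so the sketch below only illustrates the construction principle it names: a zero-mean, wavelet-like function obtained as a linear combination of shifted sigmoids. The particular combination used here is an assumption, not the paper's PPS family.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_wavelet(x):
    """A zero-mean linear combination of shifted sigmoids (illustrative)."""
    return sigmoid(x + 2) - 2 * sigmoid(x) + sigmoid(x - 2)

x = np.linspace(-8, 8, 1001)
psi = sigmoid_wavelet(x)
print(f"sample mean close to zero: {psi.mean():.4f}")
```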
Abstract:
A method has been developed to obtain quantitative information about grain size and shape from fractured surfaces of ceramic materials. A routine was elaborated to separate intergranular and transgranular grain facets on ceramic fracture surfaces by digital image processing. A commercial ceramic (ALCOA A-16, Al2O3-1.5% CrO) was used to test the proposed method. Microstructural measurements of grain shape and size taken from fracture surfaces were compared, through descriptive statistics of their distributions, with the corresponding measurements from polished and etched surfaces. The agreement between the results obtained for both types of surfaces, allowing for the expected bias in grain size values from fractures, indicates that this new technique can be used to extract the relevant microstructural information from fractured surfaces, thus minimising the time-consuming steps of sample preparation. (C) 2003 Elsevier Ltd. All rights reserved.
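Once the facets have been split into a label image (the splitting routine itself is the paper's contribution and is not reproduced here), the grain size and shape statistics compared in the study follow from standard region properties, as in this hedged sketch.

```python
import numpy as np
from skimage import measure

def grain_statistics(label_image):
    """Descriptive statistics of grain size and shape from a label image."""
    props = measure.regionprops(label_image)
    sizes = np.array([p.equivalent_diameter for p in props])  # grain size
    shapes = np.array([p.eccentricity for p in props])        # grain shape
    return {"n_grains": len(props),
            "size_mean": sizes.mean(), "size_std": sizes.std(),
            "shape_mean": shapes.mean()}
```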
Abstract:
This article describes the development of a method for analyzing the shape of the stretch zone surface, based on parallax measurement theory and digital image processing techniques. Accurate criteria for defining the boundaries of the stretch zone are established from profiles of fracture surfaces obtained in crack tip opening displacement tests on Al-7050 alloy samples. The analysis of the elevation profiles is based on the stretch zone width and height parameters. It is concluded that the geometry of stretch zone profiles under plane strain conditions can be described by a semi-parabolic relationship. (C) Elsevier B.V., 1999. All rights reserved.
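The concluding semi-parabolic relationship between stretch zone height and width can be checked with a one-parameter least-squares fit; the sketch below does so on clearly labeled synthetic data, since the paper's measurements are not available here, and the parametrization h = a·√w is an assumption about the intended form.

```python
import numpy as np
from scipy.optimize import curve_fit

def semi_parabola(w, a):
    return a * np.sqrt(w)            # h^2 proportional to w

rng = np.random.default_rng(0)
szw = np.linspace(5.0, 120.0, 25)                    # synthetic widths (um)
szh = semi_parabola(szw, 3.2) + rng.normal(0, 1, szw.size)
(a_hat,), _ = curve_fit(semi_parabola, szw, szh)
print(f"fitted coefficient a = {a_hat:.2f}")
```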
Abstract:
Digital image processing is a field that demands great processing capacity. It is therefore worthwhile to implement software that distributes the processing across several nodes hosted on computers belonging to the same network. This work specifically discusses distributed algorithms for image compression and expansion using the discrete cosine transform. The results show that the processing time saved by the parallel algorithms over their sequential equivalents depends on the resolution of the image and on the complexity of the calculation involved; that is, efficiency is greater the longer the processing time is relative to the time spent on communication between the network nodes.
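As a hedged sketch of the scheme described above, reduced from a network of computers to worker processes on one machine: the image is split into blocks and the 2-D DCT of each block is computed in parallel. Block size and worker count are illustrative.

```python
import numpy as np
from multiprocessing import Pool
from scipy.fft import dctn

def block_dct(block):
    """2-D type-II DCT of one image block."""
    return dctn(block, norm='ortho')

def parallel_dct(image, bs=8, workers=4):
    h, w = image.shape
    blocks = [image[i:i + bs, j:j + bs]
              for i in range(0, h, bs) for j in range(0, w, bs)]
    with Pool(workers) as pool:
        return pool.map(block_dct, blocks)   # one block per task

if __name__ == "__main__":
    img = np.random.rand(256, 256)           # synthetic grayscale image
    print(len(parallel_dct(img)), "blocks transformed")
```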
Abstract:
Purpose: To evaluate the reproducibility and precision of ocular measurements obtained by digital photograph analysis, as well as the transformation of the measurements according to the individual iris diameter used as an oculometric reference. Methods: Twenty-four eyes were digitally photographed in a standardized way at two distances. Two researchers analyzed the printed images using a caliper and the digital files using ImageJ 1.37. Several external ocular parameters were estimated (in mm and as fractions of the iris diameter), and the measurement methods were compared regarding their precision, agreement and correlation. Results: Caliper and digital analysis of the oculometric measures showed significant agreement and correlation; nevertheless, the precision of the digital measures was higher. The numeric transformation of the oculometric measures according to the individual iris diameter correlated strongly with the caliper measures and showed high agreement between photographs taken at the two distances. Conclusions: Facial digital photographs allowed precise and reproducible oculometric estimates, supporting their usefulness in clinical research. Using the iris diameter as an individual oculometric reference yielded high reproducibility when facial photographs were taken at different distances.
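The iris-diameter normalization at the core of the study reduces to dividing any pixel measurement by the same eye's iris diameter in pixels, which cancels the unknown pixel-to-mm scale across photographs taken at different distances. The pixel values in this sketch are placeholders.

```python
def normalize_by_iris(measure_px, iris_diameter_px):
    """Express an ocular measurement as a fraction of the iris diameter."""
    return measure_px / iris_diameter_px

# Example: a 210 px palpebral fissure with a 240 px iris -> 0.875,
# the same ratio regardless of the camera-to-face distance.
print(normalize_by_iris(210, 240))
```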
Abstract:
OBJECTIVE: To evaluate the performance of digital image analysis in estimating the area affected by chronic lower-limb ulcers. METHODS: A prospective study in which ulcers were measured by the classical planimetric method, tracing their contours onto transparent plastic film and later measuring the area on millimeter graph paper. These values were used as the reference for comparison with the area estimates obtained from standardized digital photographs of the ulcers and of their plastic-film tracings. To create a reference for converting pixels into millimeters, an adhesive label of known size was placed adjacent to the ulcer. RESULTS: Forty-two lesions were evaluated in 20 patients with chronic lower-limb ulcers. The ulcer areas ranged from 0.24 to 101.65 cm². A strong correlation was observed between the planimetric measurements and the photographs of the ulcers (R² = 0.86, p < 0.01), but the correlation between the planimetric measurements and the digital photographs of the ulcer tracings was even higher (R² = 0.99, p < 0.01). CONCLUSION: Standardized digital photography proved to be a fast, precise and non-invasive method for estimating the area affected by ulcers. Photographic measurement of the ulcer contours should be preferred over analysis of direct photographs of the ulcer.
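The pixel-to-area conversion described in METHODS reduces to a rule of three against the adhesive label of known size photographed next to the ulcer; the numbers in this sketch are placeholders.

```python
def ulcer_area_cm2(ulcer_px, sticker_px, sticker_area_cm2):
    """Convert an ulcer's pixel count to cm2 via a reference sticker."""
    return ulcer_px * (sticker_area_cm2 / sticker_px)

# Example: ulcer region of 15,400 px; a 1 cm x 1 cm sticker covers 3,850 px
print(ulcer_area_cm2(15400, 3850, 1.0))   # -> 4.0 cm^2
```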