84 results for Image pre-processing


Relevance:

30.00%

Publisher:

Abstract:

Huge image collections have become available in recent years. In this scenario, the use of Content-Based Image Retrieval (CBIR) systems has emerged as a promising approach to support image searches. The objective of CBIR systems is to retrieve the most similar images in a collection, given a query image, by taking into account image visual properties such as texture, color, and shape. In these systems, the effectiveness of the retrieval process depends heavily on the accuracy of ranking approaches. Recently, re-ranking approaches have been proposed to improve the effectiveness of CBIR systems by taking into account the relationships among images. Re-ranking approaches consider the relationships among all images in a given dataset and therefore typically demand a huge amount of computational power, which hampers their use in practical situations. On the other hand, these methods can be massively parallelized. In this paper, we propose to speed up the computation of the RL-Sim algorithm, a recently proposed image re-ranking approach, by using the computational power of Graphics Processing Units (GPUs). GPUs are emerging as relatively inexpensive parallel processors that are becoming available on a wide range of computer systems. We address the performance challenges of image re-ranking by proposing a parallel solution designed to fit the computational model of GPUs. We conducted an experimental evaluation considering different implementations and devices. Experimental results demonstrate that significant performance gains can be obtained: our approach achieves speedups of 7x over the serial implementation for the overall algorithm and of up to 36x on its core steps.
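The abstract does not spell out RL-Sim's formulation, but its core step can be illustrated as an all-pairs comparison of top-k ranked lists; every pair is independent, which is what makes the GPU mapping natural. A minimal sketch, where the intersection-based measure and all names are illustrative assumptions rather than the paper's actual algorithm:

```python
import numpy as np

def ranked_list_similarity(ranks_a, ranks_b, k):
    """Average overlap of the top-d prefixes of two ranked lists, d = 1..k.
    An assumed stand-in for RL-Sim's ranked-list comparison."""
    inter = 0.0
    for d in range(1, k + 1):
        inter += len(set(ranks_a[:d]) & set(ranks_b[:d])) / d
    return inter / k

def pairwise_similarities(ranked_lists, k):
    """All-pairs similarity matrix. Each entry is independent of the
    others, so this double loop is trivially parallelizable (e.g. one
    GPU thread per pair)."""
    n = len(ranked_lists)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            w[i, j] = ranked_list_similarity(ranked_lists[i],
                                             ranked_lists[j], k)
    return w

# Toy collection of 4 images, each with its ranked list of neighbors.
ranked = [[0, 1, 2, 3], [1, 0, 2, 3], [3, 2, 1, 0], [2, 3, 0, 1]]
W = pairwise_similarities(ranked, k=3)
```

Since every `(i, j)` entry is computed from read-only inputs, a CUDA port only needs to assign one thread (or thread block) per pair, which is the kind of mapping the paper's 36x core-step speedup suggests.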

Relevance:

30.00%

Publisher:

Abstract:

Fracture surfaces are marks of the fracture process, characterized by the energy release governed by the failure mode. Fracture toughness expresses this energy in terms of stress and strain in pre-cracked samples. The stretch zone is the characteristic region formed at the transition between fatigue fracture and final fracture, and its width correlates with the failure energy release. Quantitative fractography is a widely used tool for characterizing failure surfaces, which can point to aspects of the material or of the fracture process. Image processing works as an investigation tool, guiding many studies in this area. In order to evaluate the effectiveness of this characterization and its related studies, 300M steel was used; it was heat treated by a known aeronautical process and characterized by tensile testing and energy dispersive spectroscopy (EDS). The tensile test, performed according to ASTM E8, confirmed the effectiveness of the heat treatment and allowed the determination of mechanical properties. The EDS confirmed the material composition and supported the discussion about the fracture mechanisms present. Fracture toughness tests were also performed, providing the fracture surfaces studied under self-similarity and self-affinity approaches. In view of the above, it was possible to conclude that the fractal dimension works as a study parameter of the fracture process, allowing its values to be related to changes in thickness, which directly affects the material's behaviour in the fracture toughness approach.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to evaluate the influence of digitization parameters on periapical radiographic image quality with regard to anatomic landmarks. Digitized images (n = 160) were obtained using a flatbed scanner at resolutions of 300, 600 and 2400 dpi. The 2400 dpi images were downsampled to 300 and 600 dpi before storage. Digitizations were performed with and without black masking, using 8-bit and 16-bit grayscale, and saved in TIFF format. Four anatomic landmarks were classified by two observers (very good, good, moderate, regular, poor) in two random sessions. Intraobserver and interobserver agreements were evaluated by Kappa statistics and varied according to the anatomic landmark and resolution used. The results demonstrated that the cemento-enamel junction was the anatomic landmark that presented the poorest concordance. Overall, concordance ranged from regular to moderate for the intraobserver evaluation and from regular to poor for the interobserver evaluation. The use of black masking provided better results in the digitized images; therefore, the use of a mask to cover radiographs during digitization is necessary.
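Reducing a 2400 dpi scan to 300 dpi amounts to an 8x downsampling in each direction. A minimal sketch using block averaging; this is an illustrative assumption, not necessarily the resampling filter the study's scanner software applied:

```python
import numpy as np

def downsample(img, factor):
    """Downsample a grayscale image by averaging factor x factor blocks
    (e.g. factor=8 takes 2400 dpi to 300 dpi)."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    img = img[:h2, :w2]
    return img.reshape(h2 // factor, factor,
                       w2 // factor, factor).mean(axis=(1, 3))

# Toy 8x8 "scan"; at factor 8 it collapses to a single averaged pixel.
scan_2400 = np.arange(64, dtype=float).reshape(8, 8)
scan_300 = downsample(scan_2400, 8)
```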

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this study was to evaluate the increase of the cervical area and the dentin thickness of the mesial and distal walls of the mesial canals of mandibular molars after the use of LA Axxess (LA), CP Drill (CP) and Gates-Glidden (GG) rotary instruments. Material and Methods: Sixty root canals from thirty mandibular first molars were sectioned 3 mm below the cemento-enamel junction and divided into 3 groups (n = 20 root canals each) according to the rotary instrument used, and cervical images were captured before and after pre-enlargement instrumentation. The increase of the instrumented cervical area (mm2) and the thickness of dentin removed (mm) at the mesial and distal walls were calculated by comparing the images with the Image Tool software. Data were analyzed by ANOVA and Tukey tests (p = 0.05). Results: All rotary instruments promoted thickness reduction of the dentin walls. In the mesial wall, all rotary instruments promoted similar thickness reductions and did not differ from each other (p > 0.05). In the distal wall, the LA Axxess instrument promoted a greater dentin thickness reduction than the other groups (p < 0.05). The three rotary instruments promoted different increases in the instrumented cervical area (p < 0.05): LA promoted the greatest area increase, while GG and CP presented similar results. Conclusion: LA 20/0.06 promoted the greatest thickness reduction in the distal wall and the greatest increase in the cervical area of the root canal. On the other hand, CP was the safest instrument, with less dentin removal from the distal wall and an area increase similar to that of GG.
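The area measurement described above amounts to counting canal-lumen pixels in calibrated images before and after instrumentation. A hedged sketch (the calibration constant, masks, and names are illustrative, not values taken from the study):

```python
import numpy as np

# Assumed calibration: 1 pixel = 0.01 mm x 0.01 mm in the cross-section.
MM2_PER_PIXEL = 0.0001

def instrumented_area_increase(mask_before, mask_after):
    """Increase of the instrumented cervical area in mm^2, from binary
    masks of the canal lumen before and after pre-enlargement."""
    delta_pixels = int(mask_after.sum()) - int(mask_before.sum())
    return delta_pixels * MM2_PER_PIXEL

# Toy masks: a 20x20-pixel lumen enlarged to 30x30 pixels.
before = np.zeros((100, 100), dtype=bool)
before[40:60, 40:60] = True   # 400 px
after = np.zeros((100, 100), dtype=bool)
after[35:65, 35:65] = True    # 900 px

delta = instrumented_area_increase(before, after)  # 500 px worth of area
```

Dentin thickness removed at a given wall can be measured the same way, as the change in pixel distance from the canal edge to the outer root surface along a calibrated line.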

Relevance:

30.00%

Publisher:

Abstract:

The human dentition is naturally translucent, opalescent and fluorescent. Differences between the fluorescence level of the tooth structure and that of restorative materials may result in distinct metameric properties and, consequently, perceptibly disparate esthetic behavior, which impairs the esthetic result of restorations, frustrating both patients and staff. In this study, we evaluated the fluorescence level of different composites: Durafill in shade A2 (Du), Charisma in shade A2 (Ch), Venus in shade A2 (Ve), Opallis dentin and enamel in shade A2 (OPD and OPE), Point 4 in shade A2 (P4), Z100 in shade A2 (Z1), Z250 in shade A2 (Z2), Te-Econom in shade A2 (TE), Tetric Ceram in shade A2 (TC), Tetric Ceram N in shades A1, A2 and A4 (TN1, TN2, TN4), Four Seasons enamel and dentin in shade A2 (4SE and 4SD), Empress Direct enamel and dentin in shade A2 (EDE and EDD) and Brilliant in shade A2 (Br). Cylindrical specimens were prepared, coded and photographed in a standardized manner with a Canon EOS digital camera (ISO 400, f/2.8 aperture and 1/30 shutter speed) in a dark environment under UV light (25 W). The images were analyzed with the ScanWhite©-DMC/Darwin Systems software. The results showed statistical differences between the groups (p < 0.05), and between these groups and the average fluorescence of the dentition of young (18 to 25 years) and adult (40 to 45 years) subjects taken as controls. It can be concluded that the composites Z100, Z250 (3M ESPE) and Point 4 (Kerr) do not match the fluorescence of the human dentition, and that the fluorescence of the materials is affected by their own shade.
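Fluorescence quantification from such standardized UV photographs can be sketched as the mean gray level over the specimen region of the image; this is an illustrative stand-in for the ScanWhite analysis, not a description of that software:

```python
import numpy as np

def fluorescence_level(gray_img, specimen_mask):
    """Mean gray level over the masked specimen region of a
    standardized UV photograph (higher = more fluorescent)."""
    return float(gray_img[specimen_mask].mean())

# Toy image: a dark background with a uniformly glowing specimen.
img = np.zeros((10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 2:5] = True
img[mask] = 180.0

level = fluorescence_level(img, mask)
```

Comparing `level` across composite groups, and against the control dentition values, is then an ordinary statistical comparison of means.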

Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

30.00%

Publisher:

Abstract:

National Council for Scientific and Technological Development (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

Research on image processing has shown that combining segmentation methods may lead to a solid approach to extract semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as the final partitioning tool for graphs modeled by some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Using the Berkeley Segmentation Data Set and Benchmark, our goal is to compare the results obtained with this method against previous work to validate its performance.
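For reference, the Normalized Cut criterion that such a pipeline ultimately optimizes can be evaluated directly on a toy region adjacency graph; building that graph from the hierarchical Watershed and the distance-learning reweighting are the paper's contribution and are not shown here:

```python
import numpy as np

def ncut_value(W, in_a):
    """Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)
    for a symmetric weight matrix W and a boolean partition mask."""
    in_b = ~in_a
    cut = W[np.ix_(in_a, in_b)].sum()      # weight crossing the partition
    assoc_a = W[in_a, :].sum()             # total connection of A to all nodes
    assoc_b = W[in_b, :].sum()
    return cut / assoc_a + cut / assoc_b

# Toy graph of 4 regions: two dense clusters {0,1} and {2,3}
# joined by a single weak edge (0-3).
W = np.array([[0, 5, 0, 1],
              [5, 0, 0, 0],
              [0, 0, 0, 5],
              [1, 0, 5, 0]], dtype=float)

good = ncut_value(W, np.array([True, True, False, False]))  # cut weak edge
bad = ncut_value(W, np.array([True, False, True, False]))   # cut both clusters
```

The criterion rewards cuts that sever little weight relative to each side's total association, so `good` scores much lower (better) than `bad`; NCut searches for the partition minimizing this value, typically via the spectral relaxation.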