923 results for Digital Images


Relevance: 100.00%

Abstract:

The purpose of this paper is to introduce a new approach for edge detection in gray shaded images. The proposed approach is based on fuzzy number theory. The idea is to deal with the uncertainties concerning the gray shades making up the image, and thus to calculate the appropriateness of the pixels in relation to a homogeneous region around them. The pixels not belonging to the region are then classified as border pixels. The results have shown that the technique is simple, computationally efficient, and compares well with both the traditional border detectors and the fuzzy edge detectors. © 2007 IEEE.
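The abstract does not give the exact fuzzy-number formulation used in the paper, so the following is only a minimal sketch of the general idea, assuming a triangular fuzzy number centred on the local neighbourhood mean; the function names and the `spread` and `threshold` parameters are hypothetical.

```python
import numpy as np

def triangular_membership(x, center, spread):
    # Membership of x in a triangular fuzzy number centred at `center`.
    return np.clip(1.0 - np.abs(x - center) / spread, 0.0, 1.0)

def fuzzy_edge_map(img, spread=30.0, threshold=0.75):
    # For each interior pixel, model its 3x3 neighbourhood as a fuzzy
    # number centred at the neighbourhood mean; pixels whose own
    # membership in that number is low do not fit the homogeneous
    # region around them and are marked as border pixels.
    img = img.astype(float)
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mu = triangular_membership(img[i, j], patch.mean(), spread)
            edges[i, j] = mu < threshold
    return edges
```

On a synthetic step image this marks the pixels on either side of the intensity jump and leaves the flat regions untouched.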

Relevance: 100.00%

Abstract:

The purpose of this paper is to introduce a new approach for edge detection in grey shaded images. The proposed approach is based on fuzzy number theory. The idea is to deal with the uncertainties concerning the grey shades making up the image and, thus, calculate the appropriateness of the pixels in relation to a homogeneous region around them. The pixels not belonging to the region are then classified as border pixels. The results have shown that the technique is simple, computationally efficient, and compares well with both the traditional border detectors and the fuzzy edge detectors. Copyright © 2009, Inderscience Publishers.

Relevance: 100.00%

Abstract:

This research proposes to apply Mathematical Morphology techniques to extract highways from high-resolution digital images, aiming at the updating of cartographic products. Remote Sensing data and Mathematical Morphology techniques were integrated in the extraction process. The objective of Mathematical Morphology is to enhance the image and extract its relevant visual information. In order to test the proposed approach, some preprocessing morphological operators were applied to the original images. Routines were implemented in the MATLAB environment. Results indicated good performance by the implemented operators. The integration of the technologies aimed at implementing the semiautomatic extraction of highways for use in cartographic updating processes.
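The abstract names MATLAB but not the specific operators, so as an illustration only, here is a NumPy-only sketch of one morphological operator commonly used to highlight narrow bright structures such as road markings: the white top-hat (image minus its opening). The flat square structuring element and the `k` parameter are assumptions.

```python
import numpy as np

def erode(img, k=3):
    # Grey-level erosion with a k x k flat structuring element.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    # Grey-level dilation with a k x k flat structuring element.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def top_hat(img, k=3):
    # White top-hat: image minus its opening. Bright structures
    # narrower than the structuring element survive the subtraction.
    return img - dilate(erode(img, k), k)
```

Applied to an image containing a one-pixel-wide bright line, the opening removes the line, so the top-hat returns exactly the line and suppresses the background.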

Relevance: 100.00%

Abstract:

Digital image processing has been applied in several areas, especially where tools are needed for feature extraction and for obtaining patterns from the studied images. In an initial stage, segmentation is used to separate the image into parts that represent an object of interest, which may then be used in a specific study. There are several methods intended to perform this task, but it is difficult to find one that adapts easily to different types of images, which are often very complex or specific. To address this problem, this work presents an adaptable segmentation method that can be applied to different types of images, providing a better segmentation. The proposed method is based on a model of automatic multilevel thresholding and considers techniques of grouped histogram quantization, analysis of the histogram slope percentage, and calculation of maximum entropy to define the thresholds. The technique was applied to segment the cell nuclei and potential tissue rejection in myocardial images of biopsies from cardiac transplants. The results are significant in comparison with those provided by one of the best-known segmentation methods in the literature. © 2010 IEEE.
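The abstract's maximum-entropy step is not spelled out; the classic single-threshold version of this idea (Kapur-style entropy maximization over the histogram, shown below as a sketch rather than the paper's multilevel method) picks the threshold that maximizes the summed entropies of the background and foreground distributions.

```python
import numpy as np

def max_entropy_threshold(img, bins=256):
    # Kapur-style maximum-entropy threshold selection: choose t that
    # maximizes the entropy of the background histogram (bins < t)
    # plus the entropy of the foreground histogram (bins >= t).
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue  # all mass on one side; no valid split
        b = p[:t][p[:t] > 0] / pb   # normalized background distribution
        f = p[t:][p[t:] > 0] / pf   # normalized foreground distribution
        h = -(b * np.log(b)).sum() - (f * np.log(f)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

For a bimodal image the selected threshold falls in the gap between the two intensity clusters.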

Relevance: 100.00%

Abstract:

The algorithm creates a buffer area around the cartographic features of interest in one of the images and compares it with the other image. During the comparison, the algorithm counts the equal and different points and uses these counts to calculate the statistical values of the analysis. One calculated statistical value is the correctness, which shows the user the percentage of points that were correctly extracted. Another is the completeness, which shows the percentage of points that really belong to the feature of interest. The third value expresses the quality obtained by the extraction method, since the algorithm uses the previously calculated correctness and completeness to compute it. In all tests performed with this algorithm, the calculated statistical values could be used to represent quantitatively the quality obtained by the extraction method. It is therefore possible to say that the developed algorithm can be used to analyze extraction methods for cartographic features of interest, since the results obtained were promising.
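The abstract describes the three measures without giving formulas. The standard buffer-method definitions consistent with this description (in terms of matched points `tp`, extracted-but-unmatched points `fp`, and reference-but-missed points `fn` — variable names are mine) are:

```python
def extraction_quality(tp, fp, fn):
    # tp: extracted points falling inside the buffer around the reference
    # fp: extracted points falling outside it
    # fn: reference points not covered by the extraction
    correctness = tp / (tp + fp)    # share of the extraction that is right
    completeness = tp / (tp + fn)   # share of the reference recovered
    quality = tp / (tp + fp + fn)   # combines both in a single figure
    return correctness, completeness, quality
```

For example, 80 matched points with 20 false extractions and 20 missed reference points give 80% correctness, 80% completeness, and a quality of 2/3.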

Relevance: 100.00%

Abstract:

The human dentition is naturally translucent, opalescent and fluorescent. Differences between the fluorescence levels of tooth structure and restorative materials may result in distinct metameric properties and, consequently, perceptibly disparate esthetic behavior, which impairs the esthetic result of the restorations, frustrating both patients and staff. In this study, we evaluated the fluorescence level of different composites: Durafill in tone A2 (Du), Charisma in tone A2 (Ch), Venus in tone A2 (Ve), Opallis dentin and enamel in tone A2 (OPD and OPE), Point 4 in tone A2 (P4), Z100 in tone A2 (Z1), Z250 in tone A2 (Z2), Te-Econom in tone A2 (TE), Tetric Ceram in tone A2 (TC), Tetric Ceram N in tones A1, A2 and A4 (TN1, TN2, TN4), Four Seasons enamel and dentin in tone A2 (4SE and 4SD), Empress Direct enamel and dentin in tone A2 (EDE and EDD), and Brilliant in tone A2 (Br). Cylindrical specimens were prepared, coded, and photographed in a standardized manner with a Canon EOS digital camera (ISO 400, aperture 2.8, and 1/30 shutter speed), in a dark environment under the action of UV light (25 W). The images were analyzed with the software ScanWhite©-DMC/Darwin systems. The results showed statistical differences between the groups (p < 0.05), and between these groups and the average fluorescence of the dentition of young subjects (18 to 25 years) and adults (40 to 45 years) taken as controls. It can be concluded that the composites Z100, Z250 (3M ESPE) and Point 4 (Kerr) do not match the fluorescence of the human dentition, and that the fluorescence of the materials was found to be affected by their own tone.

Relevance: 100.00%

Abstract:

STEPanizer is an easy-to-use computer-based software tool for the stereological assessment of digitally captured images from all kinds of microscopical (LM, TEM, LSM) and macroscopical (radiology, tomography) imaging modalities. The program design focuses on providing the user a defined workflow adapted to the most basic stereological tasks. The software is compact, i.e. user-friendly without being bulky. STEPanizer comprises the creation of test systems, the appropriate display of digital images with superimposed test systems, a scaling facility, a counting module, and an export function for the transfer of results to spreadsheet programs. Here we describe the major workflow of the tool, illustrating its application with two examples from transmission electron microscopy and light microscopy, respectively.

Relevance: 100.00%

Abstract:

In this paper, the fusion of probabilistic knowledge-based classification rules and learning automata theory is proposed and as a result we present a set of probabilistic classification rules with self-learning capability. The probabilities of the classification rules change dynamically guided by a supervised reinforcement process aimed at obtaining an optimum classification accuracy. This novel classifier is applied to the automatic recognition of digital images corresponding to visual landmarks for the autonomous navigation of an unmanned aerial vehicle (UAV) developed by the authors. The classification accuracy of the proposed classifier and its comparison with well-established pattern recognition methods is finally reported.

Relevance: 100.00%

Abstract:

A domain independent ICA-based approach to watermarking is presented. This approach can be used on images, music or video to embed either a robust or fragile watermark. In the case of robust watermarking, the method shows high information rate and robustness against malicious and non-malicious attacks, while keeping a low induced distortion. The fragile watermarking scheme, on the other hand, shows high sensitivity to tampering attempts while keeping the requirement for high information rate and low distortion. The improved performance is achieved by employing a set of statistically independent sources (the independent components) as the feature space and principled statistical decoding methods. The performance of the suggested method is compared to other state of the art approaches. The paper focuses on applying the method to digitized images although the same approach can be used for other media, such as music or video.

Relevance: 100.00%

Abstract:

Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the Point Spread Function (PSF) of the human eye. Through what is known as the "Wavefront Aberration Function" of the human eye, exact knowledge of the eye's optical aberration is possible, allowing a mathematical model of the PSF to be obtained. This model could be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or onscreen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). We present the proposed technique and results obtained with a lens of known PSF introduced into the visual path of subjects without visual impairment. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is its potential to address higher-order abnormalities of the eye, currently not correctable by simple means.
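The abstract does not specify the inverse filter used; one common way to pre-compensate an image given a known PSF is a Wiener-style regularized division in the frequency domain, sketched below. The `nsr` (noise-to-signal ratio) regularization parameter is an assumption, not taken from the paper.

```python
import numpy as np

def precompensate(image, psf, nsr=0.01):
    # Wiener-style inverse filter: multiply the image spectrum by the
    # conjugate PSF spectrum over |H|^2 + nsr, so that viewing the
    # result through the blur H approximately recovers the original.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

As a sanity check, with an identity (delta) PSF the filter reduces to a uniform scaling by 1/(1 + nsr).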

Relevance: 80.00%

Abstract:

Background: Identifying new and more robust assessments of proficiency/expertise (finding new "biomarkers of expertise") in histopathology is desirable for many reasons. Advances in digital pathology permit new and innovative tests such as flash viewing tests and eye tracking and slide navigation analyses that would not be possible with a traditional microscope. The main purpose of this study was to examine the usefulness of time-restricted testing of expertise in histopathology using digital images.
Methods: 19 novices (undergraduate medical students), 18 intermediates (trainees), and 19 experts (consultants) were invited to give their opinion on 20 general histopathology cases after 1 s and 10 s viewing times. Differences in performance between groups were measured and the internal reliability of the test was calculated.
Results: There were highly significant differences in performance between the groups using the Fisher's least significant difference method for multiple comparisons. Differences between groups were consistently greater in the 10-s than the 1-s test. The Kuder-Richardson 20 internal reliability coefficients were very high for both tests: 0.905 for the 1-s test and 0.926 for the 10-s test. Consultants had levels of diagnostic accuracy of 72% at 1 s and 83% at 10 s.
Conclusions: Time-restricted tests using digital images have the potential to be extremely reliable tests of diagnostic proficiency in histopathology. A 10-s viewing test may be more reliable than a 1-s test. Over-reliance on "at a glance" diagnoses in histopathology is a potential source of medical error due to over-confidence bias and premature closure.
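The Kuder-Richardson 20 coefficient reported above has a closed form; a minimal sketch for a subjects-by-items matrix of right/wrong answers (assuming the population-variance convention for the total scores) is:

```python
import numpy as np

def kr20(scores):
    # scores: (subjects x items) binary matrix, 1 = correct answer.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                   # number of test items
    p = scores.mean(axis=0)               # proportion correct per item
    q = 1.0 - p
    total_var = scores.sum(axis=1).var()  # variance of subjects' totals
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)
```

A small worked case: four subjects scoring 4, 3, 1, 0 on a four-item test with uniform item difficulty gives KR-20 = (4/3)(1 - 1/2.5) = 0.8.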

Relevance: 80.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 70.00%

Abstract:

Quantitative determination of the modification of primary sediment features by the activity of organisms (i.e., bioturbation) is essential in the geosciences. The methods proposed since the 1960s are mainly based on visual or subjective determinations. The first semiquantitative evaluations of the Bioturbation Index, Ichnofabric Index, or the amount of bioturbation were attempted, in the best cases, using series of flashcards designed for different situations. More recently, more effective methods have involved analytical and computational techniques such as X-rays, magnetic resonance imaging, or computed tomography; these methods are complex and often expensive. This paper presents a compilation of different methods, using Adobe® Photoshop® CS6 software, for digital estimation; they form part of the IDIAP (Ichnological Digital Analysis Images Package), an inexpensive alternative to recently proposed methods that is easy to use and especially recommended for core samples. The different methods — "Similar Pixel Selection Method (SPSM)", "Magic Wand Method (MWM)" and "Color Range Selection Method (CRSM)" — entail advantages and disadvantages depending on the sediment (e.g., composition, color, texture, porosity) and ichnological features (size of traces, infilling material, burrow wall, etc.). The IDIAP provides an estimation of the amount of trace fossils produced by a particular ichnotaxon, by a whole ichnocoenosis, or even for a complete ichnofabric. We recommend applying the complete IDIAP to a given case study and then selecting the most appropriate method. The IDIAP was applied to core material recovered from IODP Expedition 339, enabling us, for the first time, to arrive at a quantitative estimation of the discrete trace fossil assemblage in core samples.
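The IDIAP methods rely on Photoshop's pixel-selection tools, which the paper does not reduce to formulas; the underlying quantity, however, is simply the fraction of the image area whose colour falls in the selected range. A minimal grey-level analogue (the function name and range parameters are mine, not the paper's) is:

```python
import numpy as np

def bioturbation_percentage(img, lower, upper):
    # Mimics a colour-range selection: count the pixels whose grey
    # value lies in [lower, upper] (the trace-fossil fill) and report
    # the percentage of the image area they occupy.
    mask = (img >= lower) & (img <= upper)
    return 100.0 * mask.sum() / img.size
```

For instance, if 30 of 100 pixels fall inside the selected grey range, the estimated bioturbation is 30%.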

Relevance: 70.00%

Abstract:

The mode I and mode II fracture toughness and the critical strain energy release rate for different concrete-concrete jointed interfaces are experimentally determined using the Digital Image Correlation (DIC) technique. Concrete beams having materials of different compressive strength on either side of a centrally placed vertical interface are prepared and tested under three-point bending in a closed-loop servo-controlled testing machine under crack mouth opening displacement (CMOD) control. Digital images are captured before loading (undeformed state) and at different instances of loading. These images are analyzed using correlation techniques to compute the surface displacements, strain components, crack opening and sliding displacements, load-point displacement, crack length, and crack tip location. It is seen that the CMOD and vertical load-point displacement computed using DIC analysis match well with those measured experimentally.