Abstract:
A jet impingement erosion test rig has been used to erode titanium alloy specimens (Ti-4Al-4V). Eroded surface profiles have been obtained by the vertical sectioning method for light microscopy observation. Mixed fractals have been measured from the profile images by a digital image processing and analysis technique. The use of this technique offers a glimpse of a quantitative correlation among material properties, fractal surface topography and erosion phenomena. (C) 2002 Elsevier B.V. All rights reserved.
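As a rough illustration of how a fractal measure can be extracted from a digitized profile image, the sketch below estimates a box-counting dimension in Python. The binary-profile input, the NumPy implementation and the box sizes are assumptions for the example, not the authors' method.

```python
# Hypothetical sketch: box-counting fractal dimension of a surface profile,
# assuming the profile is available as a 2D boolean array (True where the
# profile curve passes). Box sizes are illustrative.
import numpy as np

def box_count_dimension(profile: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the fractal dimension as the slope of log N(s) vs log(1/s)."""
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly into s x s boxes.
        h, w = (profile.shape[0] // s) * s, (profile.shape[1] // s) * s
        boxes = profile[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes containing at least one profile pixel.
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```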
Abstract:
This work is an example of the improvement of quantitative fractography by means of digital image processing and light microscopy. Two techniques are presented to investigate the quantitative fracture behavior of heat-treated Ti-4Al-4V alloy specimens under Charpy impact testing. The first technique is the Minkowski method for fractal dimension measurement from surface profiles, revealing the multifractal character of Ti-4Al-4V fracture. No clear positive correlation was observed between fractal values and Charpy energies for the Ti-4Al-4V alloy specimens, owing to their ductility, microstructural heterogeneities and the dynamic loading characteristics in the region near the V-notch. The second technique provides an entire elevation map of the fracture surface by extracting the in-focus regions of each picture from a stack of images acquired at successive focus positions, and then computing the surface roughness. Extended-focus reconstruction has been used to explain the behavior along the fracture surface. Since these techniques are based on light microscopy, their inherent low cost is very attractive for failure investigations.
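The extended-focus technique is straightforward to prototype: for each pixel, keep the slice of the focus stack where that pixel is locally sharpest, and read the winning slice index as a relative elevation. Below is a minimal sketch under assumed conventions (a grayscale stack laid out as (z, y, x), SciPy's Laplacian as the sharpness measure); it is not the authors' implementation.

```python
# Minimal extended-focus reconstruction: per-pixel argmax of local sharpness
# over a focus stack, yielding an all-in-focus composite and an elevation map.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def extended_focus(stack: np.ndarray, z_step_um: float = 1.0):
    # Local sharpness per slice: smoothed squared Laplacian response.
    sharpness = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, size=9)
                          for s in stack])
    best = np.argmax(sharpness, axis=0)         # winning focus index per pixel
    rows, cols = np.indices(best.shape)
    composite = stack[best, rows, cols]         # all-in-focus image
    elevation = best * z_step_um                # relative height map (assumed step)
    return composite, elevation
```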
Abstract:
Objective: To evaluate the influence of alternative erasing times of DenOptix® (Dentsply/Gendex, Chicago, IL) digital plates on subjective image quality and the probability of a double exposure image not occurring. Methods: Human teeth were X-rayed with phosphor plates using ten different erasing times. Two observers evaluated the images for subjective image quality (sharpness, brightness, contrast, enamel definition, dentin definition and dentin-enamel junction definition) and for the presence or absence of a double exposure image. Spearman's correlation analysis and ANOVA were performed to verify the existence of a linear association between the subjective image quality parameters and the alternative erasing times. A contingency table was constructed to evaluate the agreement among the observers, and a binomial logistic regression was performed to verify the correlation between the erasing time and the probability of a double exposure image not occurring. Results: All six image quality parameters were rated high by the examiners for erasing times between 25 s and 130 s. The same erasing time range, from 25 s to 130 s, was considered a safe erasing time interval, with no probability of a double exposure image occurring. Conclusions: The alternative erasing times from 25 s to 130 s showed high quality and no probability of double image occurrence. Thus, it is possible to reduce the operating time of the DenOptix® digital system without jeopardizing the diagnostic task. Dentomaxillofacial Radiology (2010) 39, 23-27. doi: 10.1259/dmfr/49065239.
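For readers unfamiliar with the two analyses named here, the sketch below shows, on made-up placeholder data, how a Spearman correlation between erasing time and a quality score and a binomial logistic regression of double-exposure occurrence on erasing time could be run in Python. None of the numbers come from the study.

```python
# Placeholder data only; variable names and values are invented for the demo.
import numpy as np
from scipy.stats import spearmanr
import statsmodels.api as sm

erase_s = np.array([5, 5, 10, 10, 15, 15, 25, 40, 70, 130], dtype=float)  # erasing times (s)
quality = np.array([1, 2, 2, 3, 3, 4, 4, 4, 4, 4])                        # 0-4 subjective score
ghost   = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])                        # double exposure seen?

rho, p = spearmanr(erase_s, quality)                           # monotonic association
logit = sm.Logit(ghost, sm.add_constant(erase_s)).fit(disp=0)  # P(ghost | erasing time)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
print("P(double exposure at 25 s) =", logit.predict(np.array([[1.0, 25.0]]))[0])
```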
Abstract:
Two case studies are presented on root analysis under field and laboratory conditions based on digital image analysis. Grapevine (Vitis vinifera L.) and date palm (Phoenix dactylifera L.) root systems were analyzed by both the monolith and the trench wall method, aided by digital image analysis. Correlations between root parameters and their fractional distribution over the soil profile were obtained, as well as estimates of root diameter. The results have shown the feasibility of digital image analysis for the evaluation of root distribution.
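As a simple illustration of the kind of measurement involved, the sketch below computes a fractional root distribution over depth from a binary root image (root pixels True, row 0 at the soil surface). The input format and the number of depth bands are assumptions for the example.

```python
import numpy as np

def root_fraction_by_depth(binary: np.ndarray, n_layers: int = 10) -> np.ndarray:
    """Fraction of all root pixels falling in each horizontal depth band."""
    bands = np.array_split(binary, n_layers, axis=0)   # top-to-bottom slices
    counts = np.array([np.count_nonzero(b) for b in bands], dtype=float)
    return counts / counts.sum()                       # assumes at least one root pixel
```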
Abstract:
Digital techniques have been developed and validated for the semiquantitative assessment of immunohistochemical nuclear staining. Currently, visual classification is the standard for qualitative nuclear evaluation. Analysis of the pixels that represent the immunohistochemical labeling can be more sensitive, reproducible and objective than visual grading. This study compared two semiquantitative techniques of digital image analysis with three techniques of visual image analysis for estimating p53 nuclear immunostaining. Methods: Sixty-three sun-exposed forearm-skin biopsies were photographed and submitted to three visual image analyses: the qualitative visual evaluation method (0 to 4+), the percentage of labeled nuclei and the HSCORE. Digital image analysis was performed using ImageJ 1.45p; the density of nuclei was scored per epithelial area (DensNU) and the pixel density was established in the marked suprabasal epithelium (DensPSB). Results: Statistical significance was found for the agreement and correlation among the evaluators' visual estimates, and for the correlation of the evaluators' median visual score, the HSCORE and the percentage of marked nuclei with the DensNU and DensPSB estimates. DensNU was strongly correlated with the percentage of p53-marked nuclei in the epidermis, and DensPSB with the HSCORE. Conclusion: The parameters presented herein can be applied in the routine analysis of immunohistochemical nuclear staining of the epidermis. © 2012 John Wiley & Sons A/S.
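A hedged Python analogue of the two digital scores (the study itself used ImageJ 1.45p) is sketched below: a DensNU-like count of labeled nuclei per epithelial area, and a DensPSB-like labeled-pixel density inside a suprabasal mask. The binary masks are assumed to come from upstream thresholding and region delineation.

```python
# Assumed inputs: boolean masks for labeled (stain-positive) pixels, for the
# epithelium, and for the suprabasal epithelium; not the authors' workflow.
import numpy as np
from skimage.measure import label

def dens_nu(labeled_mask: np.ndarray, epithelium_mask: np.ndarray) -> float:
    """Labeled nuclei per unit epithelial area (nuclei / pixel)."""
    nuclei = label(labeled_mask & epithelium_mask)     # connected components
    return nuclei.max() / np.count_nonzero(epithelium_mask)

def dens_psb(labeled_mask: np.ndarray, suprabasal_mask: np.ndarray) -> float:
    """Fraction of suprabasal pixels carrying the p53 label."""
    return np.count_nonzero(labeled_mask & suprabasal_mask) / np.count_nonzero(suprabasal_mask)
```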
Abstract:
Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose-designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
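To make the winning technique concrete, here is a small Python analogue of grading by 3 × 3 edge detection and of probing its stability under luminance changes. The Sobel kernel, the edge threshold and the drift test are our assumptions, not the study's exact program.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def edge_grade(gray: np.ndarray, thresh: float = 50.0) -> float:
    """Objective grade: fraction of pixels whose gradient magnitude exceeds thresh."""
    gx = convolve(gray.astype(float), SOBEL_X)
    gy = convolve(gray.astype(float), SOBEL_X.T)
    return float(np.mean(np.hypot(gx, gy) > thresh))

def luminance_drift(gray: np.ndarray, scale: float) -> float:
    """Relative change in the grade when luminance is scaled (e.g. 0.1 to 1.6)."""
    base = edge_grade(gray)
    return (edge_grade(np.clip(gray * scale, 0, 255)) - base) / base
```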
Abstract:
Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as quantifiers of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on pictures obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals from a height of 80 cm above the ground. The results obtained from the image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor-measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R²) exceeding 0.8 are obtained when up to 10% of outliers are removed.
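Of the quantifiers compared above, the autocorrelation one is the simplest to prototype. The sketch below uses the Wiener-Khinchin shortcut (the autocorrelation is the inverse FFT of the power spectrum) and reports the lag at which correlation first halves as a texture index; the index choice is ours, not necessarily the paper's statistic.

```python
import numpy as np

def texture_index(gray: np.ndarray) -> float:
    """Coarseness proxy: first horizontal lag where autocorrelation drops below 0.5."""
    g = gray.astype(float) - gray.mean()
    power = np.abs(np.fft.fft2(g)) ** 2
    acf = np.real(np.fft.ifft2(power))
    acf /= acf[0, 0]                        # normalize so the zero lag equals 1
    row = acf[0, :acf.shape[1] // 2]        # horizontal-lag profile
    below = np.flatnonzero(row < 0.5)
    return float(below[0]) if below.size else float(row.size)
```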
Abstract:
This paper presents a key-based generic model for digital image watermarking. The model aims at addressing an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system, and define system inputs and outputs. We then proceed to incorporate the use of keys in the design of various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
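To make the role of the key concrete, here is a toy spread-spectrum embedder and detector in the spirit of a keyed watermarking system (our illustration, not the paper's formal definitions): the key seeds a pseudo-random pattern, and the same key is required to correlate the watermark back out.

```python
import numpy as np

def embed(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a key-seeded +/-1 pattern to the image (toy embedding)."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect(image: np.ndarray, key: int) -> float:
    """Correlation with the key's pattern; near `strength` if marked, near 0 otherwise."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=image.shape)
    return float(np.sum((image - image.mean()) * pattern) / image.size)
```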
Abstract:
In most digital image watermarking schemes, it has become common practice to address security in terms of robustness, which is essentially a norm borrowed from cryptography. Such a treatment in the development and evaluation of a watermarking scheme may severely affect its performance and render the scheme ultimately unusable. This paper provides an explicit theoretical analysis of watermarking security and robustness, working out the exact status of the problem from the literature. With the necessary hypotheses and analyses from a technical perspective, we demonstrate the fundamental nature of the problem. Finally, some necessary recommendations are made for the complete assessment of watermarking security and robustness.
Abstract:
In 2010, the State Library of Queensland (SLQ) donated its out-of-copyright Queensland images to Wikimedia Commons. One direct effect of publishing the collections on Wikimedia Commons is the ability of general audiences to participate and help the library in processing the images in the collection. This paper discusses a project that explored user participation in the categorisation of the State Library of Queensland digital image collections. The outcomes of this project can be used to gain a better understanding of the user participation that leads to improved access to library digital collections. Two data collection techniques were used: document analysis and interviews. Document analysis was performed on the Wikimedia Commons monthly reports, while interviews served as the main data collection technique in this research. The data collected from the document analysis helped the researchers devise appropriate questions for the interviews. The interviews were undertaken with participants divided into two groups: SLQ staff members and Wikimedians (users who participate in Wikimedia). The two sets of data collected from the participants were analysed independently and compared. This method was useful for understanding the differences between the experiences of categorisation from the librarians' and the users' perspectives. This paper provides a discussion of the preliminary findings that have emerged from each participant group. The research provides preliminary information about the extent of user participation in the categorisation of SLQ collections on Wikimedia Commons, which can be used by SLQ and other interested libraries in describing their digital content through categorisation to improve user access to their collections in the future.
Abstract:
This paper evaluates trends in the imagery built into GIS applications to supplement existing vector data on streets, boundaries, infrastructure and utilities. Such imagery includes large-area digital orthophotos and Landsat and SPOT data. Future developments include pixel resolutions of 3 to 5 metres from satellites and 1 to 2 metres from aircraft. GPS and improved image analysis techniques will also assist in improving resolution and accuracy.
Abstract:
Quantitative determination of the modification of primary sediment features by the activity of organisms (i.e., bioturbation) is essential in the geosciences. The methods proposed since the 1960s are mainly based on visual or subjective determinations. The first semiquantitative evaluations of the Bioturbation Index, the Ichnofabric Index, or the amount of bioturbation were attempted, in the best cases, using series of flashcards designed for different situations. More recently, more effective methods have involved analytical and computational techniques such as X-rays, magnetic resonance imaging or computed tomography; these methods are complex and often expensive. This paper presents a compilation of digital-estimation methods, using Adobe® Photoshop® CS6 software, that form the IDIAP (Ichnological Digital Analysis Images Package), an inexpensive alternative to recently proposed methods that is easy to use and especially recommended for core samples. The different methods — the "Similar Pixel Selection Method (SPSM)", the "Magic Wand Method (MWM)" and the "Color Range Selection Method (CRSM)" — entail advantages and disadvantages depending on the sediment (e.g., composition, color, texture, porosity) and the ichnological features (size of traces, infilling material, burrow wall, etc.). The IDIAP provides an estimation of the amount of trace fossils produced by a particular ichnotaxon, by a whole ichnocoenosis or even by a complete ichnofabric. We recommend applying the complete IDIAP to a given case study and then selecting the most appropriate method. The IDIAP was applied to core material recovered from IODP Expedition 339, enabling us, for the first time, to arrive at a quantitative estimation of the discrete trace fossil assemblage in core samples.
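A rough Python analogue of the colour-range idea (the IDIAP itself is a set of Adobe® Photoshop® CS6 procedures) is sketched below: select every pixel within a tolerance of a reference burrow-fill colour and report the selected area as a bioturbation percentage. The reference colour and the tolerance are assumptions the analyst would tune per core image.

```python
import numpy as np

def bioturbation_percent(rgb: np.ndarray, ref_color, tolerance: float = 30.0) -> float:
    """Percentage of pixels within `tolerance` (Euclidean RGB distance) of ref_color."""
    dist = np.linalg.norm(rgb.astype(float) - np.asarray(ref_color, dtype=float), axis=-1)
    return 100.0 * np.count_nonzero(dist <= tolerance) / dist.size
```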