988 results for digital image correlation


Relevance: 100.00%

Abstract:

Due to design- and process-related factors, local variations arise in the microstructure and mechanical behaviour of cast components. This work establishes a Digital Image Correlation (DIC) based method for characterising and investigating the effects of such local variations on the behaviour of a high-pressure die-cast (HPDC) aluminium alloy. Plastic behaviour is studied using gradient-solidified samples, and characterisation models for the parameters of the Hollomon equation are developed based on microstructural refinement. Samples with controlled microstructural variations are produced, and the observed DIC strain field is compared with Finite Element Method (FEM) simulation results. The results show that the DIC-based method can be applied to characterise local mechanical behaviour with high accuracy. The microstructural variations are observed to cause a redistribution of strain during tensile loading. This redistribution of strain can be predicted in the FEM simulation by incorporating local mechanical behaviour through the developed characterisation model; a homogeneous FEM simulation is unable to predict the observed behaviour. The results motivate the application of a previously proposed simulation strategy that predicts and incorporates local variations in mechanical behaviour into FEM simulations already in the design process for cast components.
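The Hollomon equation mentioned above relates true stress to true plastic strain as stress = K · strain^n, with a strength coefficient K and a hardening exponent n. As a minimal illustration (not the paper's characterisation model), the two parameters can be recovered by linear regression in log-log space; the flow-curve values below are synthetic:

```python
import numpy as np

def fit_hollomon(strain, stress):
    """Fit the Hollomon equation stress = K * strain**n by
    linear regression in log-log space; returns (K, n)."""
    logs, loge = np.log(stress), np.log(strain)
    n, logK = np.polyfit(loge, logs, 1)  # slope = n, intercept = ln K
    return np.exp(logK), n

# Synthetic flow curve generated with K = 500 MPa, n = 0.2
# (illustrative values, not measurements from the paper)
eps = np.linspace(0.01, 0.1, 20)
sig = 500.0 * eps**0.2
K, n = fit_hollomon(eps, sig)
print(round(K), round(n, 3))  # recovers K = 500, n = 0.2
```

In the paper's setting, K and n would themselves be functions of the local microstructural refinement rather than constants.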

Relevance: 100.00%

Abstract:

The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and Digital Volume Correlation (DVC) techniques is a powerful new tool, not only for examining the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also for fully quantifying their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps in understanding non-linear deformation processes. Optical, non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments, based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. Combining XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of the 2D displacement and strain components in the section direction; this completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply DVC techniques to XRCT scan data of “solid” analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that applying DVC techniques to XRCT volume data can successfully quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure.
Furthermore, we discuss various options for optimising granular materials, pattern generation and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
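A core building block of DVC is locating each reference subvolume of one scan inside the next scan by maximising a correlation score. The sketch below shows this idea for a single subvolume and integer-voxel displacements only, on synthetic data; production DVC codes add subvoxel interpolation and subset shape functions:

```python
import numpy as np

def dvc_displacement(vol0, vol1, corner, size, search=3):
    """Estimate the integer-voxel displacement of one subvolume between
    two scans by maximising zero-normalised cross-correlation over a
    +/- `search` voxel window (a minimal DVC building block)."""
    z, y, x = corner
    ref = vol0[z:z+size, y:y+size, x:x+size].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_d = -np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cur = vol1[z+dz:z+dz+size,
                           y+dy:y+dy+size,
                           x+dx:x+dx+size].astype(float)
                cur = (cur - cur.mean()) / (cur.std() + 1e-12)
                score = (ref * cur).mean()  # ZNCC of the two subvolumes
                if score > best:
                    best, best_d = score, (dz, dy, dx)
    return best_d

# Synthetic "scan" pair: the second volume is the first shifted by (1, 2, 0)
rng = np.random.default_rng(0)
v0 = rng.random((32, 32, 32))
v1 = np.roll(v0, shift=(1, 2, 0), axis=(0, 1, 2))
print(dvc_displacement(v0, v1, corner=(8, 8, 8), size=8))  # -> (1, 2, 0)
```

Repeating this for a grid of subvolumes over successive scans yields the full displacement field, from which the strain tensor is derived.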

Relevance: 100.00%

Abstract:

A jet impingement erosion test rig has been used to erode titanium alloy specimens (Ti-4Al-4V). Eroded surface profiles have been obtained by the vertical sectioning method for light microscopy observation. Mixed fractals have been measured from the profile images by a digital image processing and analysis technique. The use of this technique allows a glimpse of a quantitative correlation among material properties, fractal surface topography and erosion phenomena. (C) 2002 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

This work is an example of the improvement of quantitative fractography by means of digital image processing and light microscopy. Two techniques are presented to investigate the quantitative fracture behavior of heat-treated Ti-4Al-4V alloy specimens under Charpy impact testing. The first technique is the Minkowski method for fractal dimension measurement from surface profiles, which reveals the multifractal character of Ti-4Al-4V fracture. No clear positive correlation was observed between fractal values and Charpy energies for the Ti-4Al-4V alloy specimens, owing to their ductility, microstructural heterogeneities and the dynamic loading characteristics in the region near the V-notch. The second technique provides an entire elevation map of the fracture surface by extracting the in-focus regions of each picture from a stack of images acquired at successive focus positions, and then computing the surface roughness. Extended-focus reconstruction has been used to explain the behavior along the fracture surface. Since these techniques are based on light microscopy, their inherently low cost makes them attractive for failure investigations.
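Fractal dimension estimators of this family all fit the slope of a log-log plot of a measured quantity against measurement scale. The sketch below uses plain box counting rather than the Minkowski (sausage) method proper, but illustrates the same slope-fitting idea on a 1-D profile:

```python
import numpy as np

def box_dimension(profile, scales=(1, 2, 4, 8, 16)):
    """Box-counting estimate of the fractal dimension of a 1-D surface
    profile. A simpler stand-in for the Minkowski method: both estimate
    the slope of log(measure) versus log(scale)."""
    n = len(profile)
    # Map heights onto the same integer grid as the sample index.
    h = (profile - profile.min()) / (np.ptp(profile) + 1e-12) * (n - 1)
    counts = []
    for s in scales:
        c = 0
        for j in range(0, n, s):
            block = h[j:j + s]
            # Number of s-by-s boxes the profile crosses in this column
            c += int(block.max() - block.min()) // s + 1
        counts.append(c)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope  # ~1 for a smooth line, approaching 2 for rough noise

# A straight ramp is not fractal: its estimated dimension is ~1
print(round(box_dimension(np.linspace(0.0, 1.0, 1024)), 2))  # -> 1.0
```

A multifractal profile, as reported for Ti-4Al-4V, would yield different slopes over different ranges of scales rather than a single clean line.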

Relevance: 100.00%

Abstract:

Objective: To evaluate the influence of alternative erasing times of DenOptix® (Dentsply/Gendex, Chicago, IL) digital plates on subjective image quality and on the probability of a double-exposure image not occurring. Methods: Human teeth were X-rayed with phosphor plates using ten different erasing times. Two observers evaluated the images for subjective image quality (sharpness, brightness, contrast, enamel definition, dentin definition and dentin-enamel junction definition) and for the presence or absence of a double-exposure image. Spearman's correlation analysis and ANOVA were performed to verify the existence of a linear association between the subjective image quality parameters and the alternative erasing times. A contingency table was constructed to evaluate the agreement between the observers, and a binomial logistic regression was performed to verify the correlation between the erasing time and the probability of a double-exposure image not occurring. Results: All six parameters of image quality were rated high by the examiners for erasing times between 25 s and 130 s. The same erasing time range, from 25 s to 130 s, was considered a safe erasing time interval, with no probability of a double-exposure image occurring. Conclusions: The alternative erasing times from 25 s to 130 s showed high quality and no probability of double-image occurrence. Thus, it is possible to reduce the operating time of the DenOptix® digital system without jeopardizing the diagnostic task. Dentomaxillofacial Radiology (2010) 39, 23-27. doi: 10.1259/dmfr/49065239.
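The binomial logistic regression step models the probability of a double-exposure image not occurring as a function of erasing time. A minimal gradient-descent sketch on made-up outcome data (not the study's measurements):

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, steps=5000):
    """Binomial logistic regression by full-batch gradient descent,
    modelling P(no double image | erasing time)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted probabilities
        w -= lr * ((p - y) * x).mean()          # gradient of log-loss wrt w
        b -= lr * (p - y).mean()                # gradient of log-loss wrt b
    return w, b

# Hypothetical outcomes: the double image vanishes once erasing is long
# enough (times in seconds, rescaled to keep the optimisation stable)
t = np.array([5, 10, 15, 20, 25, 40, 60, 90, 110, 130], float) / 100.0
ok = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1], float)
w, b = fit_logistic(t, ok)
p130 = 1.0 / (1.0 + np.exp(-(w * 1.30 + b)))
print(p130 > 0.5)  # True: a clean plate is predicted at 130 s
```

On such cleanly separated data the fitted curve assigns probability near 1 to the long erasing times, mirroring the study's conclusion of a safe 25-130 s interval.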

Relevance: 100.00%

Abstract:

Two case studies are presented on approaches to root analysis under field and laboratory conditions based on digital image analysis. Grapevine (Vitis vinifera L.) and date palm (Phoenix dactylifera L.) root systems were analysed by both the monolith and the trench-wall method, aided by digital image analysis. Correlations between root parameters and their fractional distribution over the soil profile were obtained, as well as estimates of root diameter. The results show the feasibility of digital image analysis for the evaluation of root distribution.

Relevance: 100.00%

Abstract:

Digital techniques have been developed and validated to semiquantitatively assess immunohistochemical nuclear staining. Currently, visual classification is the standard for qualitative nuclear evaluation. Analysis of the pixels that represent the immunohistochemical labelling can be more sensitive, reproducible and objective than visual grading. This study compared two semiquantitative digital image analysis techniques with three visual image analysis techniques for estimating p53 nuclear immunostaining. Methods: Sixty-three sun-exposed forearm-skin biopsies were photographed and submitted to three visual image analyses: the qualitative visual evaluation method (0 to 4+), the percentage of labelled nuclei and the HSCORE. Digital image analysis was performed using ImageJ 1.45p; the density of nuclei was scored per epithelial area (DensNU) and the pixel density was established in the marked suprabasal epithelium (DensPSB). Results: Statistical significance was found for the agreement and correlation among the visual estimates of the evaluators, and for the correlation of the median visual score of the evaluators, the HSCORE and the percentage of marked nuclei with the DensNU and DensPSB estimates. DensNU was strongly correlated with the percentage of p53-marked nuclei in the epidermis, and DensPSB with the HSCORE. Conclusion: The parameters presented herein can be applied in routine analysis of immunohistochemical nuclear staining of the epidermis. © 2012 John Wiley & Sons A/S.
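At its simplest, a pixel-density score of the DensPSB kind reduces to the fraction of pixels whose staining intensity exceeds a threshold within a region of interest. A minimal sketch with a hypothetical threshold value (the study's actual ImageJ workflow is more involved):

```python
import numpy as np

def pixel_density(img, threshold):
    """Fraction of pixels whose intensity exceeds a staining threshold,
    a minimal stand-in for a DensPSB-style pixel-density score.
    The threshold value is an illustrative choice."""
    mask = img > threshold          # stained pixels
    return mask.sum() / img.size    # stained fraction of the region

# Toy 2x2 "region of interest" with two stained and two unstained pixels
img = np.array([[0.9, 0.1],
                [0.8, 0.2]])
print(pixel_density(img, threshold=0.5))  # -> 0.5
```

In practice the region of interest would be the segmented suprabasal epithelium and the threshold calibrated against the chromogen.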

Relevance: 100.00%

Abstract:

Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose-designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique under changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60% to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
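A 3 × 3 edge-detection kernel owes its luminance stability to the fact that its coefficients sum to zero, so a uniform brightness shift cancels out of every gradient. The sketch below demonstrates this with Sobel kernels (the abstract does not specify which 3 × 3 kernel the study used):

```python
import numpy as np

def edge_magnitude(img):
    """Total gradient magnitude from 3x3 Sobel kernels: a sketch of
    the kind of edge-based metric the study found most stable to
    luminance changes."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy).sum()

rng = np.random.default_rng(1)
img = rng.random((32, 32))
# A constant luminance offset leaves the gradients (and the metric)
# unchanged, because each kernel's coefficients sum to zero.
print(np.isclose(edge_magnitude(img), edge_magnitude(img + 0.3)))  # True
```

A thresholding metric, by contrast, shifts directly with luminance, which is consistent with edge detection winning the stability comparison.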

Relevance: 100.00%

Abstract:

Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer; these methods use the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as quantifiers of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on images obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals, from a height of 80 cm above the ground. The results obtained from the image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor-measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R²) exceeding 0.8 are obtained when up to 10% of outliers are removed.
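Both the FFT and the autocorrelation approaches reduce a surface profile to a scalar texture indicator. The sketch below shows one illustrative formulation of each on synthetic profiles; the cut-off frequency and the 1/e criterion are arbitrary choices for the demonstration, not the paper's:

```python
import numpy as np

def spectrum_texture_index(profile):
    """Share of signal energy above a cut-off spatial frequency:
    a minimal FFT-based texture indicator."""
    spec = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
    cut = len(spec) // 8  # hypothetical macro/micro cut-off
    return spec[cut:].sum() / spec.sum()

def autocorr_length(profile):
    """Lag at which the normalised autocorrelation first drops below
    1/e: coarser textures decorrelate over longer lags."""
    x = profile - profile.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    return int(np.argmax(ac < 1.0 / np.e))

n = np.arange(512)
coarse = np.sin(2 * np.pi * n / 128)  # long-wavelength (macro) texture
fine = np.sin(2 * np.pi * n / 8)      # short-wavelength texture
print(spectrum_texture_index(coarse) < spectrum_texture_index(fine))  # True
print(autocorr_length(coarse) > autocorr_length(fine))                # True
```

Applied to real pavement images, such indicators would then be regressed against the sensor-measured texture depth to obtain the reported R² values.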

Relevance: 100.00%

Abstract:

This paper presents a key based generic model for digital image watermarking. The model aims at addressing an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system, and define system inputs and outputs. We then proceed to incorporate the use of keys in the design of various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
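As a concrete, toy instance of a key-parameterised embedding/extraction pair (far simpler than the generic model the paper proposes), a key can seed a PRNG that selects which pixels' least significant bits carry the mark:

```python
import numpy as np

def embed(image, bits, key):
    """Toy key-based watermark: a key-seeded PRNG picks the pixel
    positions whose least significant bit carries the mark. This is
    only an illustrative stand-in for the paper's abstract model, in
    which the key parameterises the embedding function."""
    marked = image.copy()
    rng = np.random.default_rng(key)
    pos = rng.choice(image.size, size=len(bits), replace=False)
    flat = marked.reshape(-1)  # view into `marked`
    flat[pos] = (flat[pos] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return marked

def extract(image, n_bits, key):
    """Recover the watermark bits; without the correct key the chosen
    positions (and hence the bits) are effectively unknowable."""
    rng = np.random.default_rng(key)
    pos = rng.choice(image.size, size=n_bits, replace=False)
    return (image.reshape(-1)[pos] & 1).tolist()

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
wm = [1, 0, 1, 1, 0, 1, 0, 0]
print(extract(embed(img, wm, key=42), len(wm), key=42))
# -> [1, 0, 1, 1, 0, 1, 0, 0]
```

In the paper's terms, the key is an input to both system components, and an attack can be formalised as any transform of the marked image that breaks this round trip.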

Relevance: 100.00%

Abstract:

In most digital image watermarking schemes, it has become common practice to address security in terms of robustness, which is essentially a norm borrowed from cryptography. Such a consideration in the development and evaluation of a watermarking scheme may severely affect its performance and ultimately render the scheme unusable. This paper provides an explicit theoretical analysis of watermarking security and robustness, establishing the exact status of the problem in the literature. With the necessary hypotheses and analyses from a technical perspective, we demonstrate how the problem fundamentally arises. Finally, some recommendations are made for the complete assessment of watermarking security and robustness.

Relevance: 100.00%

Abstract:

In 2010, the State Library of Queensland (SLQ) donated its out-of-copyright Queensland images to Wikimedia Commons. One direct effect of publishing the collections on Wikimedia Commons is that general audiences can participate and help the library process the images in the collection. This paper discusses a project that explored user participation in the categorisation of the State Library of Queensland digital image collections. The outcomes of this project can be used to gain a better understanding of user participation that leads to improved access to library digital collections. Two data collection techniques were used: document analysis and interviews. Document analysis was performed on the Wikimedia Commons monthly reports, while interviews served as the main data collection technique in this research. The data collected from the document analysis helped the researchers devise appropriate questions for the interviews. The interviews were undertaken with participants divided into two groups: SLQ staff members and Wikimedians (users who participate in Wikimedia). The two sets of data collected from the participants were analysed independently and compared; this method helped the researchers understand the differences between the experiences of categorisation from both the librarians' and the users' perspectives. This paper discusses the preliminary findings that emerged from each participant group. The research provides preliminary information about the extent of user participation in the categorisation of SLQ collections on Wikimedia Commons, which SLQ and other interested libraries can use when categorising and describing their digital content to improve user access to their collections in the future.