971 results for IMAGE SERIES
Abstract:
Fiber-reinforced plastics (FRPs) are typically difficult to machine due to their highly heterogeneous and anisotropic nature and the presence of two phases (fiber and matrix) with vastly different strengths and stiffnesses. Typical machining damage mechanisms in FRPs include a series of brittle fractures (especially for thermosets) due to shearing and cracking of the matrix material, fiber pull-outs, burring, fuzzing, fiber-matrix debonding, etc. With the aim of understanding the influence of the pronounced heterogeneity and anisotropy observed in FRPs, "Idealized" Carbon FRP (I-CFRP) plates were prepared using epoxy resin with embedded equispaced tows of carbon fibers. Orthogonal cutting of these I-CFRPs was carried out, and the chip formation characteristics, cutting force signals, and strain distributions obtained during machining were analyzed using the Digital Image Correlation (DIC) technique. In addition, the same procedure was repeated on Uni-Directional CFRPs (UD-CFRPs). Chip formation mechanisms in FRPs were found to depend on the depth of cut and fiber orientation, with pure epoxy showing a pronounced "size effect." Experimental results indicate that in-situ full-field strain measurements from DIC, coupled with force measurements using dynamometry, provide an adequate measure of anisotropy and heterogeneity during orthogonal cutting.
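As an illustration of the DIC principle used here (not the authors' implementation), the sketch below estimates the displacement of a small image subset between a reference and a deformed frame by maximizing the zero-normalized cross-correlation over an integer search window; all function and parameter names are illustrative.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_subset(ref, cur, center, half=10, search=5):
    """Integer displacement of the subset centered at `center` (row, col) in
    `ref` that best matches `cur` within +/- `search` pixels.  The center must
    lie far enough from the image borders for the slices to stay in bounds."""
    r, c = center
    template = ref[r - half:r + half + 1, c - half:c + half + 1]
    best, best_uv = -np.inf, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            patch = cur[r + du - half:r + du + half + 1,
                        c + dv - half:c + dv + half + 1]
            score = zncc(template, patch)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv  # (vertical, horizontal) displacement in pixels

# Strains follow by differentiating the displacement field obtained from a
# grid of such subsets (e.g. with numpy.gradient).
```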
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs one of point-wise, stencil, reduction, or data-dependent operations on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth, preventing effective utilization of the parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
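The kind of producer-consumer fusion and tiling that such compilers automate can be illustrated with a toy two-stage separable blur written two ways; this is a schematic NumPy sketch under assumed tile sizes, not PolyMage syntax or its generated code.

```python
import numpy as np

def pipeline_unfused(img):
    """Stage 1 (horizontal 3-tap average) fully materialized before stage 2:
    simple and parallel, but the intermediate `bx` costs a full extra pass of
    memory bandwidth."""
    bx = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    by = (bx[:-2, :] + bx[1:-1, :] + bx[2:, :]) / 3.0
    return by

def pipeline_fused_tiled(img, tile=64):
    """Both stages evaluated tile by tile so the intermediate stays in cache;
    each row tile carries a 2-row halo for the stage-2 dependence on stage 1
    (overlapped tiling, the transformation a polyhedral compiler derives
    automatically).  Produces the same result as the unfused version."""
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for r0 in range(0, h - 2, tile):
        r1 = min(r0 + tile, h - 2)
        block = img[r0:r1 + 2, :]                               # rows + halo
        bx = (block[:, :-2] + block[:, 1:-1] + block[:, 2:]) / 3.0
        out[r0:r1, :] = (bx[:-2, :] + bx[1:-1, :] + bx[2:, :]) / 3.0
    return out
```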
Abstract:
Among the multiple advantages and applications of remote sensing, one of the most important uses is to solve the problem of crop classification, i.e., differentiating between various crop types. Satellite images are a reliable source for investigating the temporal changes in crop cultivated areas. In this letter, we propose a novel bat algorithm (BA)-based clustering approach for solving crop type classification problems using a multispectral satellite image. The proposed partitional clustering algorithm is used to extract information in the form of optimal cluster centers from training samples. The extracted cluster centers are then validated on test samples. A real-time multispectral satellite image and one benchmark data set from the University of California, Irvine (UCI) repository are used to demonstrate the robustness of the proposed algorithm. The performance of the BA is compared with two other nature-inspired metaheuristic techniques, namely, the genetic algorithm and particle swarm optimization. The performance is also compared with an existing hybrid approach, the BA combined with K-means. From the results obtained, it can be concluded that the BA can be successfully applied to solve crop type classification problems.
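A minimal sketch of bat-algorithm-based partitional clustering follows, using the standard BA update rules (frequency, velocity, loudness, pulse rate) with cluster centers as the search variables and the within-cluster sum of squared errors as fitness; the parameter values and acceptance rule are illustrative and not the letter's exact settings.

```python
import numpy as np

def ba_cluster(X, k, n_bats=30, iters=200, fmin=0.0, fmax=2.0,
               alpha=0.9, gamma=0.9, seed=0):
    """Partitional clustering with the bat algorithm: each bat encodes k
    cluster centres; fitness is the within-cluster sum of squared errors."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    def sse(centres):
        dist = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        return dist.min(axis=1).sum()

    pos = X[rng.choice(n, size=(n_bats, k))].astype(float)   # centres from data
    vel = np.zeros_like(pos)
    loud = np.ones(n_bats)                                    # loudness A_i
    pulse = np.zeros(n_bats)                                  # pulse rate r_i
    fit = np.array([sse(p) for p in pos])
    best_i = int(fit.argmin())
    best, best_fit = pos[best_i].copy(), fit[best_i]

    for t in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] += (pos[i] - best) * freq
            cand = pos[i] + vel[i]
            if rng.random() > pulse[i]:          # local random walk around best
                cand = best + 0.01 * loud.mean() * rng.standard_normal((k, d))
            f_cand = sse(cand)
            if f_cand < fit[i] and rng.random() < loud[i]:
                pos[i], fit[i] = cand, f_cand
                loud[i] *= alpha                 # quieter as it homes in
                pulse[i] = 1.0 - np.exp(-gamma * (t + 1))
            if fit[i] < best_fit:
                best, best_fit = pos[i].copy(), fit[i]
    return best                                  # k optimized cluster centres
```

Test pixels are then labelled by nearest optimized centre, exactly as in K-means assignment.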
Abstract:
Fingerprints are used for identification in forensics, and fingerprint identification is classified into manual and automatic. Automatic fingerprint identification systems are further classified into latent and exemplar. A novel exemplar technique, Fingerprint Image Verification using Dictionary Learning (FIVDL), is proposed to improve the performance on low-quality fingerprints, where the dictionary learning method reduces time complexity by using block processing instead of pixel processing. The dynamic range of the image is adjusted using the Successive Mean Quantization Transform (SMQT) technique, and frequency-domain noise is reduced using spectral-frequency histogram equalization. Then, an adaptive nonlinear dynamic range adjustment technique is utilized to determine the local spectral features of the corresponding fingerprint ridge frequency and orientation. The dictionary is constructed using the spatial fundamental frequency determined from the spectral features. These dictionaries help remove the spurious noise present in fingerprints. Further, the dictionaries are used to reconstruct the image for matching. The proposed FIVDL is verified on the FVC database sets, and experimental results show an improvement over state-of-the-art techniques. (C) 2015 The Authors. Published by Elsevier B.V.
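The block-wise dictionary-learning-and-reconstruction step can be sketched with recent scikit-learn's patch utilities; this generic stand-in shows the block processing idea only and does not reproduce FIVDL's SMQT enhancement or spectral ridge-frequency features.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def denoise_blocks(img, patch=(8, 8), n_atoms=64, alpha=1.0):
    """Learn a dictionary on image blocks and rebuild the image from sparse
    codes -- block (patch) processing rather than per-pixel work."""
    patches = extract_patches_2d(img, patch)
    data = patches.reshape(len(patches), -1).astype(float)
    mean = data.mean(axis=1, keepdims=True)
    data -= mean                                   # work on zero-mean blocks
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                       batch_size=256, max_iter=100)
    codes = dico.fit_transform(data)               # sparse code per block
    recon = codes @ dico.components_ + mean        # reassemble blocks
    return reconstruct_from_patches_2d(recon.reshape(patches.shape), img.shape)
```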
Abstract:
The bilateral filter is known to be quite effective in denoising images corrupted with small dosages of additive Gaussian noise. The denoising performance of the filter, however, is known to degrade quickly with the increase in noise level. Several adaptations of the filter have been proposed in the literature to address this shortcoming, but often at a substantial computational overhead. In this paper, we report a simple pre-processing step that can substantially improve the denoising performance of the bilateral filter, at almost no additional cost. The modified filter is designed to be robust at large noise levels, but often tends to perform poorly below a certain noise threshold. To get the best of the original and the modified filter, we propose to combine them in a weighted fashion, where the weights are chosen to minimize (a surrogate of) the oracle mean-squared error (MSE). The optimally-weighted filter is thus guaranteed to perform better than either of the component filters in terms of the MSE, at all noise levels. We also provide a fast algorithm for the weighted filtering. Visual and quantitative denoising results on standard test images are reported, which demonstrate that the improvement over the original filter is significant both visually and in terms of PSNR. Moreover, the denoising performance of the optimally-weighted bilateral filter is competitive with the computation-intensive non-local means filter.
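The weighted combination can be sketched as follows; for clarity this uses the oracle MSE-optimal scalar weight in closed form, whereas the paper minimizes a surrogate of that oracle quantity because the clean image is unavailable. Function names are illustrative.

```python
import numpy as np

def oracle_weight(f1, f2, clean):
    """Scalar weight w minimizing ||w*f1 + (1-w)*f2 - clean||^2.
    Setting the derivative to zero gives w = <f1-f2, clean-f2> / ||f1-f2||^2."""
    d = (f1 - f2).ravel()
    r = (clean - f2).ravel()
    return float(np.dot(d, r) / np.dot(d, d))

def combine(f1, f2, w):
    """Weighted combination of the two denoised images."""
    return w * f1 + (1.0 - w) * f2

# Example roles: f1 = output of the standard bilateral filter,
#                f2 = output of the modified (robust) filter.
# In practice `clean` is unknown, so w is chosen by minimizing a surrogate
# (e.g. a SURE-type estimate) of the same quadratic cost.
```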
Abstract:
We address the problem of denoising images corrupted by multiplicative noise. The noise is assumed to follow a Gamma distribution. Compared with additive noise distortion, the effect of multiplicative noise on the visual quality of images is quite severe. We consider the mean-square error (MSE) cost function and derive an expression for an unbiased estimate of the MSE. The resulting multiplicative noise unbiased risk estimator is referred to as MURE. The denoising operation is performed in the wavelet domain by considering the image-domain MURE. The parameters of the denoising function (typically, a shrinkage of wavelet coefficients) are optimized by minimizing MURE. We show that MURE is accurate and close to the oracle MSE. This makes MURE-based image denoising reliable and on par with oracle-MSE-based estimates. Analogous to the other popular risk estimation approaches developed for additive, Poisson, and chi-squared noise degradations, the proposed approach does not assume any prior on the underlying noise-free image. We report denoising results for various noise levels and show that the quality of denoising obtained is on par with the oracle result and better than that obtained using some state-of-the-art denoisers.
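The parameter-selection principle can be sketched as a sweep over shrinkage thresholds that keeps the one minimizing a risk criterion. The exact MURE expression for Gamma multiplicative noise is not reproduced here, so the oracle MSE against a known clean image stands in for it purely to show the mechanics; the wavelet and threshold grid are arbitrary choices of this sketch.

```python
import numpy as np
import pywt

def denoise_with_best_threshold(noisy, clean, wavelet="db4", level=3,
                                thresholds=np.linspace(0.01, 0.5, 50)):
    """Soft-threshold wavelet detail coefficients and pick the threshold that
    minimizes a risk criterion; here the oracle MSE plays the role that MURE
    plays when the clean image is unavailable."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)

    def shrink(t):
        out = [coeffs[0]]
        for detail in coeffs[1:]:
            out.append(tuple(pywt.threshold(d, t, mode="soft") for d in detail))
        rec = pywt.waverec2(out, wavelet)
        return rec[:noisy.shape[0], :noisy.shape[1]]   # trim padding, if any

    risks = [np.mean((shrink(t) - clean) ** 2) for t in thresholds]
    t_best = thresholds[int(np.argmin(risks))]
    return shrink(t_best), t_best
```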
Abstract:
The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A common form of the filter is the Gaussian bilateral filter in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires O(sigma^2) operations per pixel, where sigma is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approximation algorithm that can cut down the computational complexity to O(1) per pixel for any arbitrary sigma (constant-time implementation). This is based on the observation that the range kernel operates via the translations of a fixed Gaussian over the range space, and that these translated Gaussians can be accurately approximated using the so-called Gauss-polynomials. The overall algorithm emerging from this approximation involves a series of spatial Gaussian filtering, which can be efficiently implemented (in parallel) using separability and recursion. We present some preliminary results to demonstrate that the proposed algorithm compares favorably with some of the existing fast algorithms in terms of speed and accuracy.
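A compact sketch of the Gauss-polynomial idea: the range kernel is split as exp(-(a-b)^2/2s^2) = exp(-a^2/2s^2) exp(-b^2/2s^2) exp(ab/s^2), and the last factor is replaced by a truncated Taylor series, so the whole filter collapses to a fixed number of spatial Gaussian filterings. The truncation order and the use of scipy.ndimage.gaussian_filter are choices of this sketch, not of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fast_bilateral(img, sigma_s=5.0, sigma_r=30.0, order=15):
    """Approximate Gaussian bilateral filter whose per-pixel cost does not
    grow with sigma_s.  `order` should be increased as the image dynamic
    range grows relative to sigma_r."""
    f = img.astype(float)
    m = 0.5 * (f.max() + f.min())          # centre the range for stability
    F = f - m
    G = np.exp(-F ** 2 / (2.0 * sigma_r ** 2))

    num = np.zeros_like(f)
    den = np.zeros_like(f)
    coef = np.ones_like(f)                 # F(x)^n / (n! * sigma_r^(2n))
    Fn = np.ones_like(f)                   # F(y)^n, folded into the filtered image
    for n in range(order + 1):
        num += coef * gaussian_filter(G * Fn * f, sigma_s)
        den += coef * gaussian_filter(G * Fn, sigma_s)
        coef *= F / ((n + 1) * sigma_r ** 2)
        Fn *= F
    return num / np.maximum(den, 1e-12)    # common exp(-F^2/2sigma_r^2) cancels
```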
Abstract:
The traditional methods for measuring the strain field in sheet metal forming are reviewed, and their shortcomings and sources of error are identified. A digital image analysis method for measuring the strain field in sheet metal forming is proposed; the measurement principle, the improvements of the new method over the traditional ones, and ways to reduce measurement error are described. The prospects of the digital image analysis method are discussed and suggestions for further improvement are given.
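A minimal example of the strain computation that such image-based grid analysis automates: true (logarithmic) major and minor strains from the measured dimensions of a deformed grid element. The grid size and measured values below are illustrative only.

```python
import numpy as np

def grid_strains(d0, d_major, d_minor):
    """True (logarithmic) strains of a grid element of reference size d0 that
    measures d_major x d_minor along the principal directions after forming."""
    e1 = np.log(d_major / d0)      # major strain
    e2 = np.log(d_minor / d0)      # minor strain
    return e1, e2

# Example: a 2.5 mm grid circle measured at 3.1 mm x 2.2 mm after forming
print(grid_strains(2.5, 3.1, 2.2))   # roughly (0.215, -0.128)
```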
Abstract:
The distribution of cortical bone in the proximal femur is believed to be a critical component in determining fracture resistance. Current CT technology is limited in its ability to measure cortical thickness, especially in the sub-millimetre range which lies within the point spread function of today's clinical scanners. In this paper, we present a novel technique that is capable of producing unbiased thickness estimates down to 0.3 mm. The technique relies on a mathematical model of the anatomy and the imaging system, which is fitted to the data at a large number of sites around the proximal femur, producing around 17,000 independent thickness estimates per specimen. In a series of experiments on 16 cadaveric femurs, estimation errors were measured as -0.01 ± 0.58 mm (mean ± 1 std. dev.) for cortical thicknesses in the range 0.3-4 mm. This compares with 0.25 ± 0.69 mm for simple thresholding and 0.90 ± 0.92 mm for a variant of the 50% relative threshold method. In the clinically relevant sub-millimetre range, thresholding increasingly fails to detect the cortex at all, whereas the new technique continues to perform well. The many cortical thickness estimates can be displayed as a colour map painted onto the femoral surface. Computation of the surfaces and colour maps is largely automatic, requiring around 15 min on a modest laptop computer.
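The core idea of fitting an anatomy-plus-imaging model to each density profile can be sketched as a three-level step model blurred by a Gaussian PSF and fitted by least squares; the parameterization and the scipy-based fit below are an illustration under those assumptions, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def blurred_cortex(x, y_out, y_cort, y_trab, x0, x1, sigma):
    """CT value along a line crossing the cortex: soft tissue (y_out), cortical
    bone (y_cort) between x0 and x1, trabecular bone (y_trab), all blurred by a
    Gaussian scanner PSF of width sigma."""
    step = lambda c: 0.5 * (1.0 + erf((x - c) / (np.sqrt(2.0) * sigma)))
    return y_out + (y_cort - y_out) * step(x0) + (y_trab - y_cort) * step(x1)

def estimate_thickness(x, profile):
    """Fit the model to a sampled density profile; thickness = x1 - x0 remains
    defined even when the true cortex is thinner than the apparent PSF width."""
    p0 = [profile[0], profile.max(), profile[-1],
          x[len(x) // 3], x[2 * len(x) // 3], 1.0]
    popt, _ = curve_fit(blurred_cortex, x, profile, p0=p0, maxfev=10000)
    return popt[4] - popt[3], popt
```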
Abstract:
A new particle image technique was developed to analyze the dispersion of tracer particles in an internally circulating fluidized bed (ICFB). The trajectories and concentration distribution of the tracer particles in the bed were imaged, and the degree of inhomogeneity of the tracer distribution was analyzed. The lateral and axial dispersion coefficients of the particles were calculated for various zones of the ICFB. The results indicate that the lateral dispersion coefficient in a fluidized bed with uneven air distribution is significantly higher than that in uniform bubbling beds with even air distribution. The dispersion coefficients vary along the bed length and height.
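Dispersion coefficients are commonly estimated from the growth of the tracer cloud's spatial variance; the least-squares fit below illustrates that calculation under the model sigma^2(t) = sigma0^2 + 2*D*t and is not necessarily the exact procedure used in this study.

```python
import numpy as np

def dispersion_coefficient(times, positions_per_frame):
    """Estimate a 1-D dispersion coefficient D from imaged tracer positions at
    successive times, assuming sigma^2(t) = sigma0^2 + 2*D*t."""
    var = np.array([np.var(p) for p in positions_per_frame])
    slope, intercept = np.polyfit(times, var, 1)   # linear fit of variance vs time
    return 0.5 * slope                              # D = slope / 2

# Lateral and axial coefficients follow from applying this separately to the
# horizontal and vertical tracer coordinates in each zone of the bed.
```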
Abstract:
This paper proposes to use an extended Gaussian Scale Mixtures (GSM) model instead of the conventional ℓ1 norm to approximate the sparseness constraint in the wavelet domain. We combine this new constraint with subband-dependent minimization to formulate an iterative algorithm on two shift-invariant wavelet transforms, the Shannon wavelet transform and the dual-tree complex wavelet transform (DTCWT). This extended GSM model introduces spatially varying information into the deconvolution process and thus enables the algorithm to achieve better results with fewer iterations in our experiments. ©2009 IEEE.
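The structure of such an iteration (a data-fidelity gradient step followed by subband-dependent shrinkage) can be sketched as an ISTA-style stand-in; the GSM-derived shrinkage rule and the Shannon/DTCWT transforms of the paper are replaced here by soft thresholding on a standard DWT from PyWavelets, and the kernel is assumed normalized.

```python
import numpy as np
import pywt
from scipy.signal import fftconvolve

def deconvolve_ista(y, h, n_iter=50, step=1.0, base_thresh=0.02,
                    wavelet="db8", level=4):
    """Wavelet-regularized deconvolution for y = h * x + noise.  Each pass does
    a Landweber gradient step, then shrinks detail coefficients with a
    subband-dependent threshold (finer subbands shrunk more).  Assumes a
    normalized blur kernel so the unit step size stays stable."""
    x = y.astype(float).copy()
    h_flip = h[::-1, ::-1]
    for _ in range(n_iter):
        resid = y - fftconvolve(x, h, mode="same")
        x = x + step * fftconvolve(resid, h_flip, mode="same")
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        shrunk = [coeffs[0]]
        for j, detail in enumerate(coeffs[1:]):        # ordered coarse -> fine
            t = base_thresh * (2.0 ** j)               # larger threshold at fine scales
            shrunk.append(tuple(pywt.threshold(d, t, mode="soft") for d in detail))
        x = pywt.waverec2(shrunk, wavelet)[:y.shape[0], :y.shape[1]]
    return x
```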
Abstract:
A study was carried out with the purpose of obtaining information about soil erodibility and of identifying a reliable and simple method for determining it. Four soil series (San Ignacio, Nejapa, Esquipulas, and Zambrano), located in the southern watershed of Lake Managua, were selected on the basis of existing information (topography, reconnaissance, and site characterization). Soil loss, surface runoff, sediment concentration, and the erodibility index of each soil series were then obtained using the Kamphorst (1987) rainfall mini-simulator. The erodibility factor (K) was obtained from four soil properties (texture, organic matter, structure, and permeability), whose values were entered into the Wischmeier (1971) nomograph. Once the erodibility indices (I.K.) and erodibility factors (K) had been obtained, it was determined that the four series show different degrees of susceptibility to erosion. Moreover, soil loss, surface runoff, sediment concentration, the erodibility index, and the erodibility factor are all influenced by organic matter. It was also verified that the erodibility index and the erodibility factor are highly correlated. Texture was observed to have no direct influence on soil loss, surface runoff, the erodibility index, or the erodibility factor.
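The nomograph referenced above has a widely used algebraic approximation (the Wischmeier-Smith equation); the sketch below implements that standard formula in US customary units with an optional SI conversion and is offered as an illustration, not as the exact procedure followed in the study. The example values are invented.

```python
def erodibility_k(silt_vfs_pct, clay_pct, om_pct, structure_code, perm_class,
                  si_units=True):
    """Approximate the Wischmeier erodibility nomograph:
    K = [2.1e-4 * M**1.14 * (12 - OM) + 3.25*(s - 2) + 2.5*(p - 3)] / 100
    with M = (%silt + %very fine sand) * (100 - %clay), OM = % organic matter,
    s = structure code (1-4), p = permeability class (1-6).  The result is in
    US customary units; multiplying by 0.1317 gives t ha h / (ha MJ mm)."""
    M = silt_vfs_pct * (100.0 - clay_pct)
    k_us = (2.1e-4 * M ** 1.14 * (12.0 - om_pct)
            + 3.25 * (structure_code - 2) + 2.5 * (perm_class - 3)) / 100.0
    return k_us * 0.1317 if si_units else k_us

# Example with illustrative values (not data from the study):
print(round(erodibility_k(65.0, 20.0, 2.5, 2, 3, si_units=False), 3))  # about 0.34
```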